sqlcmd gives a "Communication link failure" error - sql

I am trying to join two tables on an Azure SQL DB from my local Ubuntu machine. One of the tables has around 300M rows, so the query takes some time to run. But whenever I run the query like this,
sqlcmd -S *server* -d *DB* -U *User* -P *Password*
-l 600 -t 600 -Q *Query* -s ',' -o *output_file*
-W -w 1000 -C -M
it fails with the same error, but at a different point each time.
This is the error I am getting:
Sqlcmd: Error: Internal error at ReadAndHandleColumnData (Reason: Error reading column data).
SSL Provider: [error:80001044:lib(128):func(1):internal error:unexpected error]
Communication link failure
At first I thought it was a timeout issue, so I increased the login timeout (-l) and the query timeout (-t) to 10 minutes (600 seconds). But it doesn't wait 10 minutes; it throws the error before that. Can someone help, please?

I just got the same error message, but when I ran the query a second time, it worked...
¯\_(ツ)_/¯
Seems to be an internal SQL Server/sqlcmd issue over which users have very little control.
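If the failure really is transient, one low-tech workaround is to wrap the call in a retry loop. A rough sketch, assuming a bash shell and using placeholder variables for the values masked in the question (the retry count and sleep interval are arbitrary):
#!/bin/bash
# Retry sqlcmd a few times when it dies with a transient "Communication link failure".
for attempt in 1 2 3; do
  if sqlcmd -S "$SERVER" -d "$DB" -U "$USER" -P "$PASSWORD" \
       -l 600 -t 600 -Q "$QUERY" -s ',' -o "$OUTPUT_FILE" \
       -W -w 1000 -C -M; then
    break          # query finished, stop retrying
  fi
  echo "Attempt $attempt failed, retrying in 30 seconds..." >&2
  sleep 30
done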

Related

Impala Connection error

I am trying to run the below Impala command in my Cloudera cluster:
impala-shell -i connect 10.223.121.11:21000 -d prod_db -f /home/cloudera/views/a.hql
but I get this error:
Error, could not parse arguments "10.223.121.11:21000"
Could someone help me with this?
The -i flag should be given as -i hostname or --impalad=hostname (without the word connect).
The connect command is meant to be used inside impala-shell (see "Connecting to impalad through impala-shell").
The default port of 21000 is assumed unless you provide another value.
So this should work:
impala-shell --impalad=10.223.121.11 -d prod_db -f /home/cloudera/views/a.hql
In my own scenario, I was connected with impala-shell, but I suddenly got [Not connected] >. Trying to reconnect failed, and I didn't want to restart my machine (which is another option).
Trying this:
[Not connected] > CONNECT myhostname
did not help either.
Then I realized that my IP had changed.
Simply switching my IP from dynamic to static fixed it.

Running Sudo inside SSH with << heredoc

I'm sure this question looks similar to many other posts on Stack Overflow or elsewhere on the internet. However, I could not find a solution that fits my problem exactly. I have a list of tasks to run on a remote server, and passing a script works, but it doesn't suit the requirement.
I'm running the following from my server to connect to the remote server:
ssh -t user@server << 'HERE'
sudo su - <diff_user>
do task as diff_user
HERE
ssh -tt user@server << 'HERE'
sudo su - <diff_user>
do task as diff_user
HERE
With the first option (-t), I'm still not able to use sudo; it says:
sudo: sorry, you must have a tty to run sudo
With the second option (-tt), the input/output gets echoed back into my current server session, a total mess.
I also tried passing the content as a script for SSH to run on the remote host, but got similar results.
Is there a way other than commenting out the line below?
Defaults requiretty in /etc/sudoers file
I have not tried that yet, though. I know Red Hat has approved removing/commenting it out in a future release, whenever that is. If I go that route, I will have to get it done on hundreds of VMs (moreover, I don't have permission to edit the file on the VMs to try it).
Bug 1020147
Hence, my issue remains the same as before. It would be great to get some input from the experts here :)
Additional info: Red Hat RHEL 6, kernel 2.6.32-573.3.1.
I do have access to the remote host, and once I'm in, my ID does not require a password to switch to diff_user.
Since you are asking this way, I guess you don't have passwordless sudo.
You can't communicate with the remote process (sudo) when you put the script on stdin.
You should rather pass the command to ssh and su directly:
ssh -t user@server "sudo su - <diff_user> -c 'do task as diff_user'"
but it might not work. An interactive session can be initiated using expect (there are plenty of questions about that around here).
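If there are several tasks to run, one option is to chain them inside a single quoted -c string so the whole sequence is executed by the remote su shell. A sketch with hypothetical paths and script names:
# Everything inside the single quotes runs as diff_user on the remote host.
ssh -t user@server "sudo su - diff_user -c 'cd /opt/app && ./task1.sh && ./task2.sh'"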
I was trying to connect to another machine in an automated fashion and check some logs only accessible to root/sudo.
This was done by passing the password, server, user, etc. in a file. I know this is not safe and not good practice, but this is how it will be done in my company.
I ran into several problems:
tcgetattr: Inappropriate ioctl for device;
tty related problems that I don't remember exactly;
sudo: sorry, you must have a tty to run sudo, etc.
Here is the code that worked for me:
#!/bin/bash

function checkLog(){
    FILE=$1
    # Lines 5-9 of the parameter file hold: machine, user, password, log file, line count
    readarray -t LINES < "$FILE"
    machine=${LINES[4]}
    user=${LINES[5]}
    password=${LINES[6]}
    fileName=${LINES[7]}
    numberOfLines=${LINES[8]}
    # Build the remote command; sudo -S reads the password from the here-string on stdin
    IFS='' read -r -d '' SSH_COMMAND <<EOT
sudo -S <<< '$password' tail $fileName -n $numberOfLines
EOT
    # -tt forces a pseudo-terminal so sudo does not complain about a missing tty
    RESULTS=$(sshpass -p "$password" ssh -tt "$user@$machine" "${SSH_COMMAND}")
    echo "$RESULTS"
}

checkLog "$1"
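For completeness, this is the parameter-file layout the script assumes: readarray loads the whole file into LINES, and indices 4 to 8 correspond to lines 5 to 9 of the file (machine, user, password, log file path, number of lines). The values below are made up, as is the script name used in the invocation:
# params.txt (lines 1-4 can hold anything; checkLog ignores them)
# line 5: logserver.example.com
# line 6: deploy
# line 7: s3cretPassw0rd
# line 8: /var/log/app/app.log
# line 9: 200
./checkLog.sh params.txt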

Docker: SSH freezes on login

I can successfully log in to the server using ssh 111.111.111.111 without a password. But after several SSH logins, I can't access the server for a while (it freezes when I try to log in).
To tell the whole story, I'm trying to create a generic Docker machine with the following command:
docker-machine create \
  --driver generic \
  --generic-ip-address=111.111.111.111 \
  srv
All of the errors are SSH related, and they occur quite randomly, at different stages:
Error getting SSH command: Something went wrong running an SSH command!
command : cat /etc/os-release
err : exit status 255
output :
or:
if ! type docker; then curl -sSL https://get.docker.com | sh -; fi
SSH cmd err, output: exit status 255:
error installing docker:
After any of these errors, I can't log in for a while. Please let me know if any logs or configs are needed.
Since docker-machine runs each step as a separate SSH command, my provider somehow detected me as a brute-force intruder; changing the SSH port on the remote server solved the problem.
For more on this, please see another question I asked:
SSH parallel command execution freeze
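If I recall the generic driver's options correctly, the non-default SSH port can then be passed to docker-machine with --generic-ssh-port; the port number below is only an example:
docker-machine create \
  --driver generic \
  --generic-ip-address=111.111.111.111 \
  --generic-ssh-port=2222 \
  srv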

Rsync over SSH - timeout in ssh or rsync?

I'm dealing with a crappy ISP that resets my WAN connection at random points while my script is running. I want the transfer to survive these resets and carry on. I currently launch this script manually rather than via cron/launchd.
I have a fairly basic script as shown below:
rsync -rltv --progress --partial -e "ssh -i <key> -o ConnectTimeout=300" <remotedir> <localdir>
Am I better off putting the timeout in the rsync options instead?
For example:
rsync -rltv --progress --partial --timeout=300 -e "ssh -i <key>" <remotedir> <localdir>
Thanks!
ConnectTimeout only applies while SSH is establishing the connection to the server; it has nothing to do with timeouts during the data transfer. So you need rsync's --timeout option to do what you want.
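To actually survive the WAN resets, you could also wrap the rsync in a retry loop so it resumes the partial transfer after each drop. A rough sketch, with an arbitrary 60-second pause between attempts:
# Keep re-running rsync until it exits successfully; --partial resumes interrupted files.
until rsync -rltv --progress --partial --timeout=300 \
      -e "ssh -i <key>" <remotedir> <localdir>; do
  echo "rsync failed with status $?, retrying in 60 seconds..." >&2
  sleep 60
done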
Try re-running the rsync. Also try it without the ssh option. The job probably failed because you lost your network connection. I have an rsync job copying files between data centers every 2 hours via cron, and it fails about once per day.

SQL Server "Shared Memory provider" Error

I am running the following command in a batch file:
osql -S dbname -U username -P password -i C:\inputSQL.sql -o C:\postMigration.log -n
The dbname, username, and password have all been set correctly.
However, when I run the batch file, I get this output in the C:\postMigration.log log:
[SQL Native Client]Shared Memory Provider: No process is on the other
end of the pipe.
[SQL Native Client]Communication link failure
My question is: what can cause SQL Server 2005 to throw this error? Is it a login issue?
Thanks!
It looks like you're missing the -H (host) parameter. Also note that -o would be resolved relative to the server's C: drive.
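For reference, a sketch of what the full command might look like with the server instance given to -S and a host name to -H; every name below is a placeholder, not a value from the question:
osql -S MYSERVER\SQLEXPRESS -H MYWORKSTATION -d dbname -U username -P password -i C:\inputSQL.sql -o C:\postMigration.log -n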