Unable to do SCP - scp

hqcomms@BWANFW01> scp export log traffic query "vsys eq vsys11" start-time equal 2014/04/24@00:00:01 end-time equal 2014/05/08@23:59:59 to FirewallLogs@apst:/inbox/traffic
Password Authentication
Password:
exec request failed on channel 0
I am trying to export firewall logs, and it gives me the above error.
Both the client and the server are Linux/Unix-based.
Can anyone help?

The scp syntax is: scp [[user@]host1:]file1 ... [[user@]host2:]file2. The extra @ signs in your command will definitely confuse scp, but I'm not sure that's even what you're trying to do. Run your export to a local file first, and then scp the resulting file.

Related

SSH - what is the meaning of "Permission denied (publickey,password)"?

Sorry if the question is vague, but I've noticed that whenever I try to log in to an SSH server it usually says "permission denied (publickey,password)" or "permission denied (publickey,password,x,y)", where x and y are other strings. Do these indicate what I could use to log in to the server, or are they the requirements needed to log in?
It lists only the authentication methods that were attempted and failed when the connection was initiated.
As described in this article from Marko Aleksic:
One reason for the error may be sshd_config, the file that contains the SSH server configuration.
The other possibility is that the authorized_keys file has insufficient permissions.
If you have access to the server, stop the sshd service and restart it manually in debug mode:
sudo /usr/sbin/sshd -d
That way, you will see exactly what is attempted and why it fails.
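If the authorized_keys permissions turn out to be the culprit, the usual fix is to tighten the modes to what sshd's StrictModes check expects. Below is a sketch that recreates that layout in a temporary directory standing in for the real ~/.ssh:

```shell
# Recreate the permission layout sshd expects; SSH_DIR stands in for
# the user's real ~/.ssh directory.
SSH_DIR="$(mktemp -d)/.ssh"
mkdir -p "$SSH_DIR"
touch "$SSH_DIR/authorized_keys"
chmod 700 "$SSH_DIR"                  # only the owner may enter the directory
chmod 600 "$SSH_DIR/authorized_keys"  # only the owner may read/write the key list
stat -c '%a %n' "$SSH_DIR" "$SSH_DIR/authorized_keys"
```

On a real server the equivalent is `chmod 700 ~/.ssh` and `chmod 600 ~/.ssh/authorized_keys`, plus making sure the home directory itself is not group- or world-writable.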

How to disable NFS client caching?

I'm having trouble with NFS client file caching. The client reads a file that was removed from the server many minutes earlier.
My two servers are both CentOS 6.5 (kernel: 2.6.32-431.el6.x86_64)
I'm using server A as the NFS server, /etc/exports is written as:
/path/folder 192.168.1.20(rw,no_root_squash,no_all_squash,sync)
And server B is used as the client, the mount options are:
nfsstat -m
/mnt/folder from 192.168.1.7:/path/folder
Flags: rw,sync,relatime,vers=4,rsize=1048576,wsize=1048576,namlen=255,acregmin=0,acregmax=0,acdirmin=0,acdirmax=0,hard,noac,nosharecache,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.1.20,minorversion=0,lookupcache=none,local_lock=none,addr=192.168.1.7
As you can see, the "lookupcache=none,noac" options are already used to disable caching, but they don't seem to work...
I did the following steps:
1. Create a simple text file on server A.
2. Print the file from server B with "cat"; it's there.
3. Remove the file from server A.
4. Wait a couple of minutes and print the file from server B again; it's still there!
But if I run "ls" on server B at that time, the file is not in the output. The inconsistent state may last a few minutes.
I think I've checked all the NFS mount options... but I can't find a solution.
Are there any other options I missed? Or maybe the issue is not with NFS at all?
Any ideas would be appreciated :)
I tested the same steps you gave with the parameters below, and it works perfectly. I added one more parameter, "fg", to the client-side mount.
sudo mount -t nfs -o fg,noac,lookupcache=none XXX.XX.XX.XX:/var/shared/ /mnt/nfs/fuse-shared/
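For completeness, the same client-side options expressed as a persistent /etc/fstab entry might look roughly like this (server IP and paths taken from the question; a sketch, not a tested configuration):

```
# /etc/fstab on the client (server B)
192.168.1.7:/path/folder  /mnt/folder  nfs  fg,noac,lookupcache=none  0 0
```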

scp error--syntax, or something else?

I am trying to use scp to copy various files from my work machine to my personal one. I use the command:
scp usernamework@workcomputer:~/directory/to/file \
usernamepersonal@personalcomputer:~/Directory/to/copied/file
When I run the command I am prompted for my work computer password. I enter the password and get the error:
could not resolve hostname(personal computer)
Is there a syntax error in my command, or is there something else going on?
If you specify two remote hosts, scp will connect to the first one and from there connect to the second one. The second hostname is probably not resolvable/visible from the first one, and therefore it fails. There are a few things you can do:
Connect to your personal computer and do the transfer with only one remote:
scp usernamework@workcomputer:~/directory/to/file ~/Directory/to/copied/file
Use the -3 switch, which will connect to both ends from your current computer:
scp -3 usernamework@workcomputer:~/directory/to/file \
usernamepersonal@personalcomputer:~/Directory/to/copied/file

Ansible: SSH Error: unix_listener: too long for Unix domain socket

This is a known issue, and I found a solution, but it's not working for me.
First I had:
fatal: [openshift-node-compute-e50xx] => SSH Error: ControlPath too long
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.
So I created ~/.ansible.cfg with the following content:
[ssh_connection]
control_path=%(directory)s/%%h-%%r
But after rerunning my Ansible command I still get an error about 'too long':
fatal: [openshift-master-32axx] => SSH Error: unix_listener: "/Users/myuser/.ansible/cp/ec2-xx-xx-xx-xx.eu-central-1.compute.amazonaws.com-centos.AAZFTHkT5xXXXXXX" too long for Unix domain socket
while connecting to 52.xx.xx.xx:22
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.
Why is it still too long?
The limit is 104 or 108 characters, depending on the operating system (I found different statements on the web).
You XXXed out some sensitive information in the error message, so it's not clear how long your path actually is.
I guess %(directory)s is replaced with the .ansible directory in your home folder. Removing that and using your home folder directly would save you 12 characters:
control_path=~/%%h-%%r
Granted, that will spam your home directory with control sockets.
Depending on the actual length of your username, you could create another directory or find a shorter path anywhere; for example, I use ~/.ssh/tmp/%%h_%%r.
Only 3 characters less, but it's enough.
Finally, if none of that helps, you can still fall back to using /tmp for storing the sockets. But be aware that anyone with access to /tmp on that machine might then be able to use your sockets.
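That fallback would look like this in ~/.ansible.cfg (a sketch; /tmp is world-accessible, so the caveat above applies):

```
[ssh_connection]
control_path=/tmp/ansible-%%h-%%r
```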
Customizing the control_path solved the problem for me. Here is how to do it without spamming the home directory.
The control_path defaults to (per the Ansible documentation):
control_path=%(directory)s/ansible-ssh-%%h-%%p-%%r
Edit the Ansible config:
vim ~/.ansible.cfg
Here are sample file contents with the new control_path value:
[defaults]
inventory=/etc/ansible/hosts
[ssh_connection]
control_path=%(directory)s/%%h-%%r
control_path_dir=~/.ansible/cp
Just to add more: as the error shows, this problem generally happens when the control path is too long for a Unix domain socket, so it is not specific to Ansible.
You can easily fix this by updating your SSH config to use the %C format instead of %r@%h:%p, as follows:
$ mkdir ~/.ssh/control
$ vim ~/.ssh/config
Host *
ControlPath ~/.ssh/control/%C
ControlMaster auto
More detail: man ssh_config defines the %C format as 'a hash of the concatenation: %l%h%p%r'.
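To see why %C keeps the path short: it expands to a fixed-length hash of %l%h%p%r, so the socket name no longer grows with the hostname. A rough shell illustration of the idea (OpenSSH computes the digest internally; this only demonstrates that a digest of those fields has constant length, with placeholder values for the four fields):

```shell
# Concatenate local hostname, remote host, port and user, then hash,
# mimicking what ssh does for the %C token.
local_name="mylaptop"
remote_host="ec2-xx-xx-xx-xx.eu-central-1.compute.amazonaws.com"
port="22"
user="centos"
digest=$(printf '%s' "${local_name}${remote_host}${port}${user}" | sha1sum | cut -d' ' -f1)
echo "${#digest}"   # a fixed number of hex characters, however long the hostname is
```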
In my case, the Ansible config file was missing; after creating it, this worked for me.

scp: uploading a file from desktop to var folder

I am trying to send an SQL dump file from my desktop to my VPS.
I tried this command exactly:
scp C:/users/ioi/desktop/localhost.sql root@327.25.10.15:var/www/public
It gives me an error that says
Name or service not known
[root@vps-7174-4454 public]# scp C:/users/ioi/desktop/localhost.sql root@327.25.10.15:var/www/public/
ssh: Could not resolve hostname C: Name or service not known
I don't understand why it won't recognize the C: part. Does anyone have any idea?
The reason it doesn't recognise the C: part is that scp uses a colon to separate the hostname from the filename, so scp thinks C is the hostname and fails to resolve it.
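The parsing rule can be illustrated with plain shell string splitting (a sketch of what scp effectively does with its argument, not scp's actual code):

```shell
# scp treats everything before the first colon as a (user@)host
# and the remainder as the path on that host.
arg="C:/users/ioi/desktop/localhost.sql"
host="${arg%%:*}"   # "C" -- mistaken for a hostname
path="${arg#*:}"    # "/users/ioi/desktop/localhost.sql"
echo "host=$host path=$path"
```

A related trick: for a local file whose name contains a colon, prefixing it with ./ (for example scp ./weird:name host:/path) forces scp to treat it as a local path, since the name then starts before any colon with a slash-containing segment.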
Also make sure you are on the correct machine/terminal. On Windows, scp C:/Users/... works, but I was executing it on the remote Linux host instead of my local command line.