Jenkins Publish Over SSH Plugin on macOS caps transfers at 130MB and freezes

In a Jenkins test project I have this in Execute shell:
dd if=/dev/urandom of=ios_512MB bs=531628032 count=1
and I have checked and configured "Send files or execute commands over SSH after the build runs".
When I run it, I see:
Started by user X
Building remotely on ios in workspace /data/workspace/test
[test] $ /bin/sh -xe /var/folders/9b/s86tztx90bb9c_73gtynfzx80000gn/T/hudson8582983867531973712.sh
+ dd if=/dev/urandom of=ios_512MB bs=531628032 count=1
1+0 records in
1+0 records out
531628032 bytes transferred in 44.384345 secs (11977828 bytes/sec)
SSH: Connecting from host [jenkins2.local]
SSH: Connecting with configuration [jenkins.builds] ...
I can see this connection in the network traffic, and then it stops. On the SFTP server I have:
ls -lh
-rw------- 1 10048 10047 130M Apr 16 09:56 ios_512MB
On Windows/Ubuntu everything works fine. How can I fix this?

Fixed it with this Execute shell step:
echo "mkdir ${short}/${date}
mkdir ${short}/${date}/${RANDSTR}
put ${WORKSPACE}${location}${n}${format} ${short}/${date}/${RANDSTR}/
put ${WORKSPACE}${location}${n}.html ${short}/${date}/${RANDSTR}/" | sftp -o StrictHostKeyChecking=no -P 2222 -i ~/.ssh/id_rsa jenkins.builds@sftp.example.com
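The same fix reads a little more cleanly as a here-document, which avoids the echo-and-pipe quoting; this is just a sketch assuming the same variables are set earlier in the job:
sftp -o StrictHostKeyChecking=no -P 2222 -i ~/.ssh/id_rsa jenkins.builds@sftp.example.com <<EOF
mkdir ${short}/${date}
mkdir ${short}/${date}/${RANDSTR}
put ${WORKSPACE}${location}${n}${format} ${short}/${date}/${RANDSTR}/
put ${WORKSPACE}${location}${n}.html ${short}/${date}/${RANDSTR}/
EOF
Since the EOF delimiter is unquoted, the shell expands the variables before sftp sees the commands, just as the echo version does.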

PID recv: short read in CRIU

I am receiving a "PID recv: short read" error while using lazy-pages migration with CRIU.
At the source, I run the following command:
memhog -r1000 64m
cd /tmp/dump
sudo -H -E criu dump -t $(pidof memhog) -D /tmp/dump --lazy-pages --address 10.237.23.102 --port 1234 --shell-job --display-stats -vvvv -o d.log
Then, in a separate terminal on the source machine itself:
scp -r /tmp/dump/ dst:/tmp/
Now, on the destination machine I start the daemon:
cd /tmp/dump
criu lazy-pages --page-server --address $(gethostip -d src) --port 1234 --display-stats -vvvvv
And finally, the restore command:
cd /tmp/dump
criu restore -D /tmp/dump/ --shell-job --lazy-pages --display-stats -o restore.log -vvvv
The error is thrown by the lazy server daemon on the destination machine.
Furthermore, it works fine with the memhog binary installed from the numactl package, but it does not when I build memhog from source.
Any suggestions for solving this will be appreciated.
::Update:: Solved. See answer
Found the issue:
I was building the binaries separately on two different machines, so their build-ids did not match. Solution: build on one machine and then just scp the binary over to the other.
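A quick way to verify such a mismatch is to compare the ELF build-id note on both machines (memhog here stands in for whatever binary you are checkpointing):
# print the GNU build-id note; run on both machines and compare the hashes
readelf -n $(which memhog) | grep -i "build id"
If the two hashes differ, CRIU on the destination is restoring against a different binary than the one that was dumped.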

Remote ssh twice

I am trying to execute commands over ssh remotely.
It's a double hop (two levels deep).
From my host, I ssh into target1, which is connected to target2.
I need the commands executed on target2.
There is no direct connection from host to target2.
Ex:
ssh root@target1 -t "ssh root@target2 -t "cat /usr/value""
The above command works.
ssh root@target1 -t "ssh root@target2 -t "echo 1 > /usr/value""
This command does not work. I get "No such file or directory"
You are trying to use a shell feature (redirection) instead of a command, so run that via shell (note: quoting gets tricky):
ssh -t root@target1 'ssh -t root@target2 /bin/bash -c \"echo 1 > /usr/value\"'
I suggest you study the man page for ssh_config. For instance, with this:
Host target2
User root
ProxyJump root@target1
your above command would be:
ssh -t target2 '/bin/bash -c "echo 1 > /usr/value"'
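If you don't want a config entry, recent OpenSSH clients accept the same jump host on the command line via -J, which is the flag form of ProxyJump:
ssh -J root@target1 -t root@target2 '/bin/bash -c "echo 1 > /usr/value"'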
The next step is to use Ansible (or similar) for system changes.

Running Ansible but keep getting "failed to connect via ssh"

MacBook-Pro:rails1 woo$ ssh vagrant@10.0.1.92
Welcome to Ubuntu 14.04.4 LTS (GNU/Linux 3.13.0-91-generic x86_64)
* Documentation: https://help.ubuntu.com/
System information as of Tue Jul 5 03:52:20 UTC 2016
System load: 0.0 Users logged in: 1
Usage of /: 4.0% of 39.34GB IP address for eth0: 10.0.2.15
Memory usage: 32% IP address for eth1: 10.0.1.100
Swap usage: 0% IP address for eth2: 10.0.1.92
Processes: 80
Graph this data and manage this system at:
https://landscape.canonical.com/
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
Last login: Tue Jul 5 03:52:20 2016 from 10.0.1.19
vagrant@vagrant-ubuntu-trusty-64:~$
But,
>ansible -vvvv all -m ping -u vagrant
/Library/Python/2.7/site-packages/Crypto/Util/number.py:57: PowmInsecureWarning: Not using mpz_powm_sec. You should rebuild using libgmp >= 5 to avoid timing attack vulnerability.
_warn("Not using mpz_powm_sec. You should rebuild using libgmp >= 5 to avoid timing attack vulnerability.", PowmInsecureWarning)
Using /Users/woo/vagrant_vms/rails1/ansible.cfg as config file
Loaded callback minimal of type stdout, v2.0
<10.0.1.92> ESTABLISH SSH CONNECTION FOR USER: vagrant
<10.0.1.92> SSH: EXEC ssh -C -vvv -o ForwardAgent=yes -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 10.0.1.92 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1467746604.02-144506913281055 `" && echo ansible-tmp-1467746604.02-144506913281055="` echo $HOME/.ansible/tmp/ansible-tmp-1467746604.02-144506913281055 `" ) && sleep 0'"'"''
10.0.1.92 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh.",
"unreachable": true
}
I've done:
cat ~/.ssh/id_rsa.pub | ssh vagrant@10.0.1.92 "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"
and it was successful as tested by the ssh command.
I don't understand why I keep getting the "Failed to connect" message.
The address 10.0.1.92 is in the hosts file, and the VM's IP is set to that address.
Can you try this:
ansible -vvvv all -m ping -u vagrant
Try issuing these two commands before connecting to the vagrant box with Ansible:
$ eval $(ssh-agent -s)
$ ssh-add
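Note that the debug line above shows PasswordAuthentication=no, so the connection has to succeed with a key. If the agent route doesn't help, you can also point Ansible at the key explicitly on the command line; the path below is an assumption, substitute the key whose public half you installed on the VM:
ansible -vvvv all -m ping -u vagrant --private-key=~/.ssh/id_rsa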

How to enable sshpass output to console

When using scp and entering the password interactively, the file copy progress is shown on the console, but there is no console output when using sshpass in a script to scp files.
$ sshpass -p [password] scp [file] root@[ip]:/[dir]
It seems sshpass is suppressing or hiding the console output of scp. Is there a way to enable the sshpass scp output to console?
After
sudo apt-get install expect
the file send-files.exp works as desired:
#!/usr/bin/expect -f
# FILES and DEST are placeholders; set them to the files to copy and
# the destination (user@host:/path) before running.
set FILES "file1.txt"
set DEST "user@host:/tmp/"
spawn scp -r $FILES $DEST
match_max 100000
expect "*?assword:*"
send -- "12345\r"
expect eof
Not exactly what was desired, but better than silence:
SSHPASS="12345" sshpass -e scp -v -r $FILES $DEST 2>&1 | grep -v debug1
Note that -e is considered a bit safer than -p.
Output:
Executing: program /usr/bin/ssh host servername, user username, command scp -v -t /src/path/dst_file.txt
OpenSSH_6.6.1, OpenSSL 1.0.1i-fips 6 Aug 2014
Authenticated to servername ([10.11.12.13]:22).
Sending file modes: C0600 590493 src_file.txt
Sink: C0600 590493 src_file.txt
Transferred: sent 594696, received 2600 bytes, in 0.1 seconds
Bytes per second: sent 8920671.8, received 39001.0
This way:
output=$(sshpass -p $PASSWD scp -v $filename root@192.168.8.1:/root 2>&1)
echo "Output = $output"
you capture the console output in the variable output.
Or, if you only want to see the console output of the scp command, just add the -v flag to your sshpass command:
sshpass -p $PASSWD scp -v $filename root@192.168.8.1:/root
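If you want to watch the output and keep a copy at the same time, a simple variant is to pipe it through tee (the log file name is arbitrary):
sshpass -p $PASSWD scp -v $filename root@192.168.8.1:/root 2>&1 | tee scp.log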

Ansible script ssh error

I am creating a VM in OpenStack (a Linux VM) and launching an Ansible script from there. I am getting the following SSH error.
---
- hosts: licproxy
  user: my-user
  sudo: yes
  tasks:
    - name: Install tinyproxy
      command: sudo apt-get install tinyproxy
    - name: Update tinyproxy
      command: sudo apt-get update
    - name: Install bind9
      shell: yes '' | sudo apt-get install bind9
Though I am directly able to ssh to machine 10.32.1.40 from the Linux box in OpenStack using the key admin-key-dev29:
PLAY [licproxy] ***********************************************************
GATHERING FACTS ***************************************************************
<10.32.1.40> ESTABLISH CONNECTION FOR USER: my-user
<10.32.1.40> REMOTE_MODULE setup
<10.32.1.40> EXEC ssh -C -tt -vvv -o StrictHostKeyChecking=no -o IdentityFile="/opt/apps/installer/tenant-dev29/ssh/admin-key-dev29" -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=my-user -o ConnectTimeout=10 10.32.1.40 /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1450797442.33-90087292637238 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1450797442.33-90087292637238 && echo $HOME/.ansible/tmp/ansible-tmp-1450797442.33-90087292637238'
EXEC previous known host file not found for 10.32.1.40
fatal: [10.32.1.40] => SSH Error: ssh: connect to host 10.32.1.40 port 22: Connection refused
while connecting to 10.32.1.40:22
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.
TASK: [Install tinyproxy] *****************************************************
FATAL: no hosts matched or all hosts have already failed -- aborting
I removed the known_hosts entry and ran the script again; it still shows the same message.
UPDATE
I observed that manual ssh works fine, but the Ansible script gives the SSH error.
I logged in to the newly created VM using the ssh key and checked the /var/log/auth.log file:
Dec 30 13:00:33 licproxy-vm sshd[1184]: Server listening on :: port 22.
Dec 30 13:01:10 licproxy-vm sshd[1448]: error: Could not load host key: /etc/ssh/ssh_host_ed25519_key
Dec 30 13:01:10 licproxy-vm sshd[1448]: Connection closed by 192.168.0.106 [preauth]
Dec 30 13:01:32 licproxy-vm sshd[1450]: error: Could not load host key: /etc/ssh/ssh_host_ed25519_key
The vm has sshd version OpenSSH_6.6.1 version
I checked the /etc/ssh folder and found ssh_host_ed25519_key and ssh_host_ed25519_key.pub missing.
I created those files using the command ssh-keygen -A.
Now I want to know why these files were missing from the ssh folder. Is this a bug?
The problem was SSH port 22: the port was not up yet.
I added the following code, which waits for the SSH port to come up:
while ! nc -z $PROXY_SERVER_IP 22; do
sleep 10s
done
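As a guard against waiting forever (for instance, if the VM never boots), a bounded variant of the same loop could look like this; the 30-attempt cap is an arbitrary choice:
tries=0
until nc -z $PROXY_SERVER_IP 22; do
  tries=$((tries + 1))
  if [ $tries -ge 30 ]; then
    echo "SSH port 22 still down after 30 attempts" >&2
    exit 1
  fi
  sleep 10s
done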