I'm trying - like many other people - to rsync data between gears of a scalable app.
I have created a minutely cron script that fetches and parses a list of gears. It then determines which gears are NOT the local one and tries to rsync each remote gear's app-root/data folder into $OPENSHIFT_DATA_DIR on the local gear...
The first part works, but the rsync command fails with:
rsync: connection unexpectedly closed (0 bytes received so far) [receiver]
I have tried about a dozen variants of
rsync -avz -e "ssh -i ${OPENSHIFT_DATA_DIR}.ssh/id_rsa" $targetgear#app-user.rhcloud.com:app-root/data/test/ ${OPENSHIFT_DATA_DIR}test
$targetgear is a variable from parsing the list of gears - it's the long string of "stuff" in the gear name. The script loops through the CSV list provided by haproxy and correctly writes what it finds to a log file, so I know it's getting at least that far...
The id_rsa file exists and has been added to the account in the web control panel.
Any help or pointers would be greatly appreciated. I have combed pages and pages of forum posts and documentation on rsync and ssh, but can't seem to get around it...
Thanks in advance.
THANK YOU RAUL!!!
Ok... The problem was the URL for the target gear - I needed the long string of gunk TWICE:
$targetgear@$targetgear-user.rhcloud.com ...
and not
$targetgear@app-user.rhcloud.com
The app name prefix in the URL is replaced by the gear ID.
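For anyone landing here later, a minimal sketch of the corrected loop - assuming $gears holds the gear IDs parsed from the haproxy CSV, that $OPENSHIFT_GEAR_UUID identifies the local gear, and with "user" standing for the domain namespace as above:
for targetgear in $gears; do
    # skip the local gear
    [ "$targetgear" = "$OPENSHIFT_GEAR_UUID" ] && continue
    rsync -avz -e "ssh -i ${OPENSHIFT_DATA_DIR}.ssh/id_rsa" \
        "$targetgear@$targetgear-user.rhcloud.com:app-root/data/test/" \
        "${OPENSHIFT_DATA_DIR}test"
done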
I have a Raspberry Pi Zero connected to a SIM7600G-H 4G HAT, with a camera module attached. I want to use it as a webcam that takes a picture at a defined interval and sends it via scp to a web server, which displays it on a homepage. The shell script I created is started via a cron job every 2 hours.
The whole setup works very well if I have a good, powerful SIM connection. However, as soon as I operate the setup at the desired location, a strange behavior appears.
At the location where I run the webcam I only have a relatively poor 3G connection. If I run the scp command from a connected laptop, it works fine, so I can assume the problem has nothing to do with the SIM module.
The Raspbian system shows two peculiar behaviors.
Even though I created a key and gave it to the web server, every now and then it asks me to enter the password when I run the scp command. This does not happen when I connect directly to the web server via ssh.
Raspbian uploads the first few images to the web server without problems using the scp command, but then it suddenly stops working.
I send two pictures each time. One replaces an existing image on the web server (this is the image displayed on the homepage) and the other goes into an archive folder, named after the timestamp. It looks like this:
scp foo.jpg <username>@webserver:dir/to/folder/default.jpg
FILENAME=`date +"%Y-%m-%d_%H-%M-%S"`
scp foo.jpg <username>@webserver:dir/to/archive_folder/${FILENAME}.jpg
Because of the password issue I installed an additional tool called sshpass and prefixed the scp commands with:
sshpass -p <password>
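So an upload ends up looking something like this (the placeholders are the ones from the snippet above):
sshpass -p <password> scp foo.jpg <username>@webserver:dir/to/folder/default.jpg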
However, it seems the issue is not related to sshpass, since it also happens if I use only the scp command and enter the password myself.
In the end, for the "new" file that goes into the archive folder, the Pi creates the filename on the web server but never transmits the file's data, so the file remains empty.
The file that should be replaced, default.jpg, is not touched at all.
I tried to find out what happens via the debug output, but there is no useful information. It always stops at the line that shows the transfer progress, stuck at 0% and 0 KB/s.
I have now spent several days looking for a solution. I even took the setup home, where everything suddenly worked smoothly again, but as soon as I mounted it at the site again, the problem reappeared.
Does anyone know of a bug where the Raspberry Pi Zero can no longer transfer files via scp when the data transfer rate is low? One image is about 300 KB, and my laptop takes about 20 seconds to transfer it over the same connection as the Raspberry.
After countless attempts, my simplest solution was to set up a cron job that restarts the Raspberry shortly before it takes a photo for the webcam. It then searches for a new network connection and finds one very reliably.
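A sketch of what that crontab can look like, assuming the photo script is /home/pi/webcam.sh (the name here is illustrative) and runs every two hours on the hour:
# root crontab: reboot five minutes before each photo run
55 1-23/2 * * * /sbin/reboot
# pi crontab: take and upload the photo every two hours
0 */2 * * * /home/pi/webcam.sh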
Given:
Connection to the Uni's secure shell like this:
me@my_computer~$ ssh <my_name>@unixyz.cs.xy.com
Password:***********
Welcome to Unixyz. You now can access a terminal on system unixyz:
my_name@unixyz~$ ls
Desktop Documents Pictures Music desired_document.pdf
my_name@unixyz~$
Task/Question:
Getting the desired_document.pdf to my own system. I have thought of some options so far:
1) Since I can access an editor like nano, I could write a C/Java program, compile it in the home directory, and make that program send the PDF. Problem with that: I'd have to code a client on the Uni machine and a server on my own system. On top of that, I only know how to transfer text given to stdin, not PDFs. And it's obviously too much work for the given task.
2) I found some vague information about two commands: scp and sftp. Unfortunately, I cannot figure out how they are used exactly.
The latter is basically my question: are the commands scp and sftp valid options for doing this, and how are they used?
EDIT:
I received a first answer and the problem persists. As stated, I use:
scp me@server.cs.xyz.com:/path/topdf /some/local/dir
which gives me:
/some/local/dir: no such file or directory
I'm not sure which environment you are in.
Do you use Linux or Windows as your everyday operating system?
If you are using Windows, there are some GUI-based scp/ssh implementations that let you transfer these files through an Explorer-like UI.
For example there is https://winscp.net/
You can indeed use scp to do exactly that, and it's easier than it might look:
scp your_username@unixyz.cs.xy.com:path/to/desired_document.pdf /some/local/dir
The key is the colon after the server name, where you add your path.
Optionally you can pass in the password as well, but that's bad practice, for obvious reasons.
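The question also mentions sftp; that works too, assuming the server allows it. A minimal interactive session, using the same host and path:
sftp your_username@unixyz.cs.xy.com
sftp> get path/to/desired_document.pdf /some/local/dir/
sftp> bye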
I actually found the answer myself, and the error I was making. Both the guy with the answer and the commenter were right. BUT:
scp must be launched when you are in YOUR terminal; I always tried to do it while I was connected to the remote server.
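In other words, from the local prompt:
me@my_computer~$ scp <my_name>@unixyz.cs.xy.com:path/to/desired_document.pdf /some/local/dir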
2 hours wasted because of that.
I know the subject has been covered several times, but I still can't make it work.
I am a very inexperienced web developer.
I created several jobs which:
1) save the SQL dump into a folder on the server
2) download the content of the folder to my local machine (SCP)
The problem is that, the way I created job 2), all the files existing from 1) are transferred to my local machine. I am looking for a way to scp only the latest created file. I tried the approach that several people describe as the best answer (ls -t nlb* | head -1), but I still get an error message.
Any chance someone could simply correct the command?
scp -i /users/myname/desktop/projects/online/mykey.pem ec2-user@XXXXXXXXXX.eu-west-1.compute.amazonaws.com:/var/www/dbbckp/`* | head -1` /users/myname/downloads/dbbckp
Many thanks in advance.
You can either use rsync, which syncs only the difference, or you can name your dump files in a proper way so you know which one to copy.
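If you want to stick with scp, a hedged sketch of the "latest file" idea from the question: resolve the newest file name on the remote host first, then copy just that file (key and paths are the ones from the question):
# ask the remote host for the newest file in the backup folder
LATEST=$(ssh -i /users/myname/desktop/projects/online/mykey.pem ec2-user@XXXXXXXXXX.eu-west-1.compute.amazonaws.com 'ls -t /var/www/dbbckp | head -1')
# copy only that file
scp -i /users/myname/desktop/projects/online/mykey.pem "ec2-user@XXXXXXXXXX.eu-west-1.compute.amazonaws.com:/var/www/dbbckp/$LATEST" /users/myname/downloads/dbbckp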
I set up Jenkins CI to deploy my PHP app to our QA Apache server and ran into an issue. I successfully set up pubkey authentication from the local jenkins account to the remote apache account, but when I use rsync, I get the following error:
[jenkins@build ~]# rsync -avz -e ssh test.txt apache@site.example.com:/path/to/site
protocol version mismatch -- is your shell clean?
(see the rsync man page for an explanation)
rsync error: protocol incompatibility (code 2) at compat.c(64) [sender=2.6.8]
[jenkins@build ~]#
One potential problem is that the remote apache account doesn't have a valid login shell. Should I create a remote account that has shell access and is part of the "apache" group? It is not an SSH key problem, since ssh apache@site.example.com connects successfully but quickly kicks me out because apache doesn't have a shell.
That would probably be the easiest thing to do. You will probably want to only set it up with a limited shell like rssh or scponly to only allow file transfers. You may also want to set up a chroot jail so that it can't see your whole filesystem.
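A hedged sketch of that setup, assuming rssh is installed (scponly is configured similarly; paths vary by distro):
# give the apache account a file-transfer-only shell
usermod -s /usr/bin/rssh apache
# then, in /etc/rssh.conf, enable only the protocols you need:
# allowscp
# allowrsync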
I agree that that would probably be the easiest thing to do. We do something similar, but use scp instead. Something like:
scp /path/to/test.txt apache@site.example.com:/path/to/site
I know this is pretty old thread, but if somebody comes across this page in future...
I had the same problem, but it was fixed once I fixed my .bashrc.
I removed the statement echo "setting DISPLAY=$DISPLAY" that used to be in my .bashrc. rsync has issues with that statement because any output printed by the remote shell's startup files corrupts the rsync protocol stream (that's what "is your shell clean?" refers to).
So, fixing .bashrc/.cshrc/.profile errors helped me.
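If you still want the echo for interactive logins, a common pattern is to guard it in .bashrc so non-interactive sessions (rsync, scp, sftp) get a clean stream:
# $- contains "i" only in interactive shells
case $- in
    *i*) echo "setting DISPLAY=$DISPLAY" ;;
esac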
I have a bash file that contains wget commands to download over 100,000 files, totaling around 20 GB of data.
The bash file looks something like:
wget http://something.com/path/to/file.data
wget http://something.com/path/to/file2.data
wget http://something.com/path/to/file3.data
wget http://something.com/path/to/file4.data
And there are exactly 114,770 rows of this. How reliable would it be to ssh into a server I have an account on and run this? Would my ssh session time out eventually? Would I have to stay ssh'ed in the entire time? What if my local computer crashes or gets shut down?
Also, does anyone know how many resources this would take? Am I crazy to want to do this on a shared server?
I know this is a weird question, just wondering if anyone has any ideas. Thanks!
Use
nohup ./scriptname &> logname.log &
This will ensure that:
The process will continue even if the ssh session is interrupted
You can monitor it while it is running
I would also recommend printing some progress at regular intervals; it will be good for log analysis, e.g. echo "1000 files copied".
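A minimal sketch of that, assuming the 114,770 wget lines are replaced by a urls.txt file with one URL per line:
#!/bin/bash
# download each URL, appending wget's output to a log
n=0
while read -r url; do
    wget -nv "$url" >> wget.log 2>&1
    n=$((n+1))
    (( n % 1000 == 0 )) && echo "$n files copied"
done < urls.txt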
As far as resource utilisation is concerned, it depends entirely on the system and mostly on network characteristics. Theoretically you can calculate the time with just data size and bandwidth, but in real life delays, latencies, and data losses come into the picture.
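For example, at a sustained 10 Mbit/s (an assumed figure, purely for illustration), 20 GB is 160,000 Mbit / 10 Mbit/s = 16,000 seconds, roughly four and a half hours, before any of that overhead.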
So make some assumptions, do some mathematics, and you'll get the answer :)
Depends on the reliability of the communication medium, hardware, ...!
You can use screen to keep it running while you disconnect from the remote computer.
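For example (the session name is arbitrary):
screen -S downloads    # start a named session, then run the script inside it
# detach with Ctrl-A d; later, reattach with:
screen -r downloads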
You want to disconnect the script from your shell and have it run in the background (using nohup), so that it continues running when you log out.
You also want to have some kind of progress indicator, such as a log file that logs every file that was downloaded and also all the error messages. By default, nohup sends stdout and stderr to nohup.out if you don't redirect them yourself.
With such a file, you can pick up broken downloads and aborted runs later on.
Give it a test-run first with a small set of files to see if you got the command down and like the output.
I suggest you detach it from your shell with nohup.
$ nohup myLongRunningScript.sh > script.stdout 2>script.stderr &
$ exit
The script will run to completion - you don't need to be logged in throughout.
Do check for any options you can give wget to make it retry on failure.
If it is possible, generate MD5 checksums for all of the files and use it to check if they all were transferred correctly.
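Hedged examples of both suggestions (the flags are standard wget options; the checksum manifest name is an assumption):
# retry up to 10 times, resume partial downloads, pause between retries
wget -t 10 -c --waitretry=10 http://something.com/path/to/file.data
# afterwards, verify against a checksum manifest if one is available
md5sum -c checksums.md5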
Start it with
nohup ./scriptname &
and you should be fine.
Also, I would recommend that you log the progress so that you can find out where it stopped, if it does.
wget url >> logfile.log 2>&1
could be enough.
To monitor progress live you could:
tail -f logfile.log
It may be worth it to look at an alternate technology, like rsync. I've used it on many projects and it works very, very well.
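That only applies if you have ssh access to the machine holding the files; a typical invocation would be something like:
rsync -avz --partial --progress user@something.com:/path/to/ ./data/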