How to transfer files from a remote server to my Amazon S3 instance? - amazon-s3

I have about 15 GB of data in 5 files that I need to transfer to an Amazon S3 bucket. They are currently hosted on a remote server that I have no scripting or shell access to; I can only download them via an HTTP link.
How can I transfer these files to my Amazon S3 bucket without first having to download them to my local machine and then re-upload them to S3?

If you want to automate the process, use the AWS SDK.
For example, with the AWS PHP SDK:
use Aws\Common\Aws;

// build the service locator from your credentials/config file
$aws = Aws::factory('/path/to/your/config.php');
$s3 = $aws->get('S3');

// upload a local file to the bucket
$s3->putObject(array(
    'Bucket'     => 'your-bucket-name',
    'Key'        => 'your-object-key',
    'SourceFile' => '/path/to/your/file.ext'
));
More details:
http://blogs.aws.amazon.com/php/post/Tx9BDFNDYYU4VF/Transferring-Files-To-and-From-Amazon-S3
http://docs.aws.amazon.com/aws-sdk-php/guide/latest/service-s3.html
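If an intermediate machine (for example an EC2 instance) can reach the HTTP link, another route is to skip the SDK and stream each download straight into S3 with the AWS CLI, so nothing is written to disk in between. A minimal sketch, assuming the CLI is configured with credentials and that the URL, bucket name and key are placeholders:
# stream an HTTP download directly into an S3 object (no temporary file)
curl -L "https://remote.example.com/data/file1.bin" \
  | aws s3 cp - s3://your-bucket-name/file1.bin
Repeat (or loop) for each of the five files; running this on an EC2 instance keeps the S3 leg of the transfer fast.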

Given that you only have 5 files, use the S3 console uploader at http://console.aws.amazon.com/s3/home?region=us-east-1 (Actions, Upload) after you have downloaded the files to some intermediate machine. An EC2 instance running Windows might be the best choice, as the upload from EC2 to S3 would be very fast. You can download Chrome onto the EC2 instance from chrome.google.com, or use the existing web browser (IE) to do the job.

[1] SSH with keys
Generate a key pair on the sending machine:
ssh-keygen -f ~/.ssh/id_rsa -q -P ""
cat ~/.ssh/id_rsa.pub
Place this public SSH key into the ~/.ssh/authorized_keys file on the receiving machine:
mkdir ~/.ssh
chmod 0700 ~/.ssh
touch ~/.ssh/authorized_keys
chmod 0644 ~/.ssh/authorized_keys
[2] Snapshot ZFS, minimize transfer with LZMA, send with RSYNC
zfs snapshot zroot@150404-SNAPSHOT-ZROOT
zfs list -t snapshot
Compress to a file with lzma (more effective than bzip2):
zfs send zroot@150404-SNAPSHOT-ZROOT | lzma -9 > /tmp/snapshots/zroot@150404-SNAPSHOT-ZROOT.lzma
rsync -avz -e "ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null" --progress --partial /tmp/snapshots/zroot@150404-SNAPSHOT-ZROOT.lzma <username>@<ip-address>:/
[3] Speed up the transfer with MBUFFER, send with ZFS Send/Receive
Start the receiver first. This listens on port 9090, has a 1 GB buffer, and uses 128 kB chunks (same as zfs):
mbuffer -s 128k -m 1G -I 9090 | zfs receive zremote
Now we send the data, also passing it through mbuffer:
zfs send -i zroot@150404-SNAPSHOT-ZROOT zremote@150404-SNAPSHOT-ZROOT | mbuffer -s 128k -m 1G -O <ip-address>:9090
[4] Speed up the transfer by only sending the diff
zfs snapshot zroot@150404-SNAPSHOT-ZROOT
zfs snapshot zroot@150405-SNAPSHOT-ZROOT   # e.g. one day later
zfs send -i zroot@150404-SNAPSHOT-ZROOT zroot@150405-SNAPSHOT-ZROOT | zfs receive zremote/data
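The temporary file and rsync in [2] can also be collapsed into a single pipeline that ships the incremental stream from [4] over ssh directly; a sketch assuming the same snapshot, pool and host placeholders as above:
# send only the diff, compress in flight, decompress and receive on the remote side
zfs send -i zroot@150404-SNAPSHOT-ZROOT zroot@150405-SNAPSHOT-ZROOT \
  | lzma -9 \
  | ssh <username>@<ip-address> "lzma -d | zfs receive zremote/data"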
See also my notes

Related

How do I get to my Spinnaker dashboard after installing Minnaker on my AWS EC2

I installed Spinnaker on my AWS EC2 and logged into the dashboard the first time, but immediately after, when I logged out and logged in again using the same base URL, I was directed to a different person's GitHub account. What might have happened? Does it mean my account is hacked, or what? Somebody advise please.
I am being directed to the link attached below instead of the IP address taking me to the Spinnaker dashboard, and yet I am using the correct base address.
These are the instructions I follow for Minnaker on EC2 (ap-southeast-2):
Pre-requisites
Obtain an AWS Elastic IP
From the AWS EC2 console, choose a Region (preferably ap-southeast-2) and
launch an EC2 instance with at least 16 GB memory, 4 CPUs and a 60 GB disk.
An initial deployment can be performed using instance type m4.xlarge.
Attach the AWS Elastic IP to the Spinnaker Instance
Access the instance through SSH
Get minnaker
curl -LO https://github.com/armory/minnaker/releases/latest/download/minnaker.tgz
Untar
tar -xzvf minnaker.tgz
Go to minnaker directory
cd minnaker
Use the Public IP value from the Elastic IP as $PUBLIC_IP
Obtain the Private IP of the instance with hostname -I, and add both to local environment variables:
export PRIVATE_IP=$(hostname -I)
export PUBLIC_IP=AWS_ELASTIC_IP
Execute the command below to install Open Source Spinnaker
./scripts/install.sh -o -P $PRIVATE_IP
Validate installation
UI
Validate the installation by going to the generated URL https://PUBLIC_IP
Use user admin and get the password at /etc/spinnaker/.hal/.secret/spinnaker_password (see the snippet below)
The UI should load
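For reference, the generated password can be printed on the instance itself (assuming the default Minnaker secret path above):
# print the admin password created by the installer
sudo cat /etc/spinnaker/.hal/.secret/spinnaker_password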
Kubernetes Deployment
Minnaker is deployed inside an EC2 as a lightweight Kubernetes K3S cluster
Run kubectl version
Get info from cluster kubectl cluster-info
Tweak bash completion and enable a simple alias:
kubectl completion bash
echo 'source <(kubectl completion bash)' >>~/.bashrc
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl >/dev/null
echo 'alias k=kubectl' >>~/.bashrc
echo 'complete -F __start_kubectl k' >>~/.bashrc
Validate Spinnaker is running
k -n spinnaker get pods -o wide
Halyard Config
Validate that a default halyard config has been set up. Make the /usr/local/bin/hal wrapper executable:
sudo chmod 755 /usr/local/bin/hal
The wrapper is a small script that forwards hal commands to the Halyard pod:
#!/bin/bash
set -x
HALYARD=$(kubectl -n spinnaker get pod -l app=halyard -oname | cut -d'/' -f 2)
kubectl -n spinnaker exec -it ${HALYARD} -- hal "$@"
Then run hal config to check the generated configuration.
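With the wrapper in place, any hal subcommand typed on the host runs inside the Halyard pod; for example (a standard Halyard command, shown purely as a usage sketch):
# list the Spinnaker versions Halyard knows about, confirming the wrapper works
hal version list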
Minnaker repo
Clone the repository:
git clone https://github.com/armory/minnaker
Go to the scripts directory: cd minnaker/scripts
Add permissions to the installation script: chmod 775 all.sh
References
armory/minnaker

s3fs mount storage in Cloud (NOT amazon's)

I am new to s3, object storage and linux.
I have a CentOS linux server and an active subscription to public cloud for object storage. Now I want to connect and mount the cloud object storage to the linux server.
I have installed FUSE s3fs on the Linux server and added the access key and secret access key to the password file, but I am missing where I should set the endpoint. I know that by default it points to Amazon's S3, but I want to use a different service and I cannot find where to set the endpoint. Any help would be appreciated, thank you!
You can try running:
s3fs <_bucket_name_> <_local_mount_point_> -o nocopyapi -o use_path_request_style \
  -o nomultipart -o sigv2 -o url=http://s3-api.sjc-us-geo.objectstorage.softlayer.net/ \
  -d -f -o curldbg -of2 -o allow_other -o passwd_file=/etc/ibmcos.pwd
If that doesn't work for you, find out your endpoint URL from the portal or look here:
https://ibm-public-cos.github.io/crs-docs/endpoints
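For completeness, the file that passwd_file points at is just ACCESS_KEY:SECRET_KEY with tight permissions, and the endpoint goes in the url option; a minimal sketch with placeholder credentials, bucket and mount point, reusing the SoftLayer endpoint from the command above:
# create the s3fs credentials file (format ACCESS_KEY_ID:SECRET_ACCESS_KEY)
echo "ACCESS_KEY_ID:SECRET_ACCESS_KEY" | sudo tee /etc/ibmcos.pwd >/dev/null
sudo chmod 600 /etc/ibmcos.pwd
# mount the bucket against the non-Amazon endpoint
sudo mkdir -p /mnt/cos
sudo s3fs my-bucket /mnt/cos -o passwd_file=/etc/ibmcos.pwd \
  -o use_path_request_style \
  -o url=http://s3-api.sjc-us-geo.objectstorage.softlayer.net/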

Best way to copy files from Docker volume on remote server to local host?

I've got:
My laptop
A remote server I can SSH into, which has a Docker volume inside of which are some files I'd like to copy to my laptop.
What is the best way to copy these files over? Bonus points for using things like rsync, etc., which are fast / can resume / show progress, and for not writing any temporary files.
Note: my user on the remote server does not have permission to just scp the data straight out of the volume mount in /var/lib/docker, although I can run any containers on there.
Having this problem myself, I created dvsync, which uses ngrok to establish a tunnel that rsync then uses to copy data even if the machine is in a private VPC. To use it, you first start the dvsync-server locally, pointing it at the source directory:
$ docker run --rm -e NGROK_AUTHTOKEN="$NGROK_AUTHTOKEN" \
--mount source=MY_DIRECTORY,target=/data,readonly \
quay.io/suda/dvsync-server
Note: you need the NGROK_AUTHTOKEN, which can be obtained from the ngrok dashboard. Then start the dvsync-client on the target machine:
docker run -e DVSYNC_TOKEN="$DVSYNC_TOKEN" \
--mount source=MY_TARGET_VOLUME,target=/data \
quay.io/suda/dvsync-client
The DVSYNC_TOKEN can be found in the dvsync-server output; it is a base64-encoded private key and tunnel info. Once the data has been copied, the client will exit.
I'm not sure about the best way of doing so, but if I were you I would run a container sharing the same volume (in read-only, as it seems you just want to download the files within the volume) and download these, as sketched below.
This container could be running rsync as you wish.
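One concrete way to do that without temporary files is to let a throwaway container on the remote host stream the volume out as a tar archive over the existing SSH connection; a sketch assuming a volume named MY_VOLUME and that you can run containers remotely (names and paths are placeholders):
# unpack the remote Docker volume into a local directory, streamed over ssh
mkdir -p ./volume-copy
ssh user@remote.host \
  "docker run --rm -v MY_VOLUME:/data:ro alpine tar -C /data -cf - ." \
  | tar -xf - -C ./volume-copy
This is a plain stream rather than a resumable rsync (for that, something like the dvsync approach above is needed), but it avoids touching /var/lib/docker directly.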

Cannot ssh into remote machine after rsync

I followed this page on Protecting the Docker daemon Socket with HTTPS to generate ca.pem, server-key.pem, server-cert.pem, key.pem and key-cert.pem
I wanted a remote Docker daemon to use those keys, so I used rsync via ssh to send three of the files (ca.pem, server-key.pem and key.pem) to the remote host's home directory. The identity file for ssh into the remote host is called dl-datatest-internal.pem
ubuntu@ip-10-3-1-174:~$ rsync -avz -progress -e "ssh -i dl-datatest-internal.pem" dockerCer/ core@10.3.1.181:~/
sending incremental file list
./
ca.pem
server-cert.pem
server-key.pem
sent 3,410 bytes received 79 bytes 6,978.00 bytes/sec
total size is 4,242 speedup is 1.22
Ever since, the remote host has stopped recognising the identity file and started asking for a non-existent password.
ubuntu@ip-10-3-1-174:~$ ssh -i dl-datatest-internal.pem core@10.3.1.151
core@10.3.1.151's password:
Does anyone know why and how to fix it? I still have all the keys if that helps.
There are a couple of things about the rsync command that bother me, but I can't put my finger on the problem (if there is one).
The rsync command and the subsequent ssh command reference different hosts: rsync goes to core@10.3.1.181:~/ and ssh goes to core@10.3.1.151. Those are different machines, no?
The ~ in the target of the rsync command, core@10.3.1.181:~/. I am pretty sure that the ~/ references the core home directory, but you could just get rid of the ~/ and replace it with a . (dot).
If you can reproduce the environment you did the copy in, you can add a --dry-run to the rsync command to see what it is going to do. Looking at this command, I can't see it erasing the target's .ssh directory.
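As a concrete illustration of that last suggestion, the transfer can be previewed, and aimed at a subdirectory rather than the home directory, before anything is overwritten; a sketch reusing the paths from the question (the certs/ target directory is hypothetical):
# show what rsync would send without changing anything on the remote host
rsync -avz --dry-run --progress -e "ssh -i dl-datatest-internal.pem" \
  dockerCer/ core@10.3.1.181:~/certs/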

How to copy a directory from local machine to remote machine

I am using ssh to connect to a remote machine.
Is there a way I can copy an entire directory from a local machine to the remote machine?
I found this link to do it the other way round, i.e. copying from the remote machine to the local machine.
The easiest way is scp:
scp -r /path/to/local/storage user@remote.host:/path/to/copy
rsync is best when you want to update a copy that has previously been transferred.
If that doesn't work, rerun with -v and see what the error is.
It is very easy with rsync as well:
rsync /path/to/local/storage user@remote.host:/path/to/copy
I recommend the usage of rsync over scp, because it is highly likely that you will one day need a feature that rsync offers and then you benefit from your experience with the tool.
This worked for me:
rsync -avz -e 'ssh' /path/to/local/dir user@remotehost:/path/to/remote/dir
Use this if you have to use an SSH port other than 22:
rsync -avzh -e 'ssh -p sshPort' /my/local/dir/ remoteUser@host:/path/to/remote/dir
This works if your remote server uses the default port 22:
rsync -avzh /my/local/dir/ remoteUser@host:/path/to/remote/dir
This worked for me.
Follow this link for detailed understanding.
We can do this using the scp command, for example:
scp -r /path/to/local/machine/directory user@remotehost(server IP address):/path/to/server/directory
In case of a different port:
By default, the SCP protocol operates on port 22, but this can be overridden by supplying the -P flag followed by the port number, for example:
scp -P 8563 -r /path/to/local/machine/directory user@remotehost(server IP address):/path/to/server/directory
NOTE: we use the -r flag to copy a directory's files/folders recursively instead of a single file.