I'd like to use restic to do cloud backups.
I have a lot of GBs of space on pCloud, and I want to back up there. I work on a macOS laptop (with a GUI, that is), and I also have the pCloud client installed on it.
Question: restic's docs describe connecting to pCloud via rclone. However, I already have the pCloud client installed and the pCloud Drive folder is mounted.
Does rclone provide any advantages over backing up directly to the mounted pCloud Drive folder?
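For reference, the rclone-based setup the docs describe would look roughly like this (the remote name "pcloud" and the repository path are just placeholders I'm assuming):
rclone config   # create a pCloud remote, e.g. named "pcloud"
restic -r rclone:pcloud:restic-repo init
restic -r rclone:pcloud:restic-repo backup ~/Documents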
Thanks!
Dropbox has again banned my public folder because we exceeded the daily limit. That is very stressful for us, so I'm looking for other options to share our media files with our users.
Our site is hosted in a Digital Ocean droplet: 2 GB Memory / 40 GB Disk / SFO1 - Ubuntu LEMP on 14.04
Our media files are in a folder in our Dropbox Pro account.
Is there some way to copy or move the files from our Dropbox account over to our DigitalOcean droplet?
Thanks in advance!!
Not sure where exactly you want to migrate, but these links should be useful for you (or someone else in the future):
Dropbox client for Linux - a tutorial on how to use the Dropbox client with DigitalOcean and sync files between the server and Dropbox.
Mount a DigitalOcean Spaces instance - this tutorial shows how to mount your DO Spaces storage on your DO Droplet using s3fs.
Configure backups to DigitalOcean Spaces - this one describes how to configure s3cmd to exchange files between the server and Spaces.
You could use the info above to, e.g., download your entire Dropbox data using the Dropbox client, then create a Spaces instance. Next you would mount Spaces with s3fs and simply move the data "inside" your Droplet onto the newly mounted filesystem (a rough sketch follows below). Alternatively, use another server for the download and upload to Spaces with s3cmd (if network speed and disk space are the constraints on your primary server). Of course, it might be enough to just download the data with the Dropbox client and keep it on your server without external Spaces, if your HDD/SSD is big enough.
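A minimal sketch of the s3fs route, assuming a Space named my-media in the sfo2 region, credentials already stored in /etc/passwd-s3fs, and the Dropbox client syncing into ~/Dropbox (all of these names are assumptions, not part of the original question):
mkdir -p /mnt/spaces
s3fs my-media /mnt/spaces -o url=https://sfo2.digitaloceanspaces.com -o passwd_file=/etc/passwd-s3fs    # mount the Space
cp -a ~/Dropbox/media /mnt/spaces/    # copy the synced media onto the mounted Space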
This link might be useful as well when writing your own scripts; it describes how to automate backups to Dropbox using "on the fly" DO instances. I haven't tried this, though.
Given the popularity of hosting static sites from AWS S3 buckets, it would be great to be able to do that from Cloud9 too.
Is there any way I can set up an FTP-based workspace that uses an S3 bucket as the source?
Transmit and other FTP apps have the ability to work directly with an S3 bucket. I did try setting up an FTP workspace in Cloud9 using the following:
Host: s3.amazonaws.com
Username: My-Access-Key
Password: My-Secret-Key
I know it was a long shot, and I have since read confirmation that Amazon doesn't allow simple FTP access to buckets like that.
Any ideas if this is possible?
FTP workspaces on Cloud9 are actually being phased out, so I'd recommend using the mounting feature described in this blog post to mount an FTP source: https://c9.io/site/blog/2014/12/ftp-sftp-mounting-beta
Unfortunately, S3 doesn't support the FTP protocol, so this would have to be a new feature. Luckily, we're opening up our SDK to be able to implement features like this. If you're interested in contributing, please email us via https://support.c9.io
Codeanywhere (https://codeanywhere.com) does this now. However, you'll have to shell out $7 to $10/m for that capability.
But then again, like Cloud9 (which I'm a big fan of), you get a bunch of features on the Codeanywhere IDE.
I was disappointed when Cloud9 discontinued its efforts on S/FTP. Codeanywhere seems to be taking on the cloud/storage issue head-on by handling cloud access to S3, FTP, SFTP, Google Drive and others.
I am new to Virtual Machines and CLI so please bear with me.
I have a CentOS 6.5 instance running on Compute Engine.
I ran yum update (without creating a snapshot of the previous disk - yes, I am an idiot) and now I cannot connect to the machine using its IP address.
I tried the following steps.
Tried to connect through FileZilla - didn't work.
Tried through PuTTY - didn't work.
Tried through the browser option given by the CE console - didn't work.
I even tried creating a snapshot and starting up another VM with the snapshot - didn't work.
If anyone knows how I can get the files and folders out from the previous disk, I can start up a new VM and transfer everything again.
I do not have the latest database and this is important.
Please help!
Thanks
Warren
The way to recover is to delete your VM without deleting the disk, then create another VM with its own boot disk, attach and mount the original disk, and recover any data that you need from it.
First things first: on the VM instances page, click on the instance name that is currently running with that disk, and uncheck the box "Delete boot disk when instance is deleted". Then delete the instance.
Now, create a new instance with its own boot disk. To differentiate this new disk from the original boot disk:
use a different OS (or OS version) for the new disk, e.g., if using Ubuntu, try a different version or use Debian; if using RHEL, try CentOS, or vice versa
check which disk is mounted at /; that should be the new disk
Mount the original disk as read-only and recover any information you need. Once you have a backup of your data, you can remount it with read-write access and try to fix it (but back up the data first!).
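A rough sketch of the attach-and-mount step, assuming the original disk is named old-boot-disk, the new instance is recovery-vm, and the old disk shows up inside the VM as /dev/sdb1 (all of these names are assumptions):
gcloud compute instances attach-disk recovery-vm --disk old-boot-disk    # attach the old disk to the new VM
mkdir -p /mnt/olddisk
mount -o ro /dev/sdb1 /mnt/olddisk    # mount it read-only and copy out what you need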
I finally solved this problem; thanks to Misha for sending me in the right direction.
The steps are below for anyone who has the same issue.
Problem:
After updating the CentOS server using yum update, I was unable to connect back to the server.
I tried all possible combinations but had no luck. This seems to be a known issue, as there was some material on the Compute Engine site regarding it.
Solution:
I followed the steps as Misha suggested. I started up another VM with its own boot disk and then attached the original disk with read-write access.
Note: I was unable to mount the disk as read-only.
The commands were
mkdir /mnt/sdb1
mount /dev/sdb1 /mnt/sdb1
Once I mounted the disk, I copied the files from the html folder on the sdb1 disk to the html folder on sda1 (the new boot disk).
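In other words, something along these lines, assuming the default document root under /var/www (the exact paths are an assumption on my part):
cp -a /mnt/sdb1/var/www/html/. /var/www/html/    # copy the site files from the old disk onto the new boot disk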
The database was a bit more challenging.
I tried quite a few times, but copying the files from /mnt/sdb1/var/lib/mysql into the mysql folder on the new disk was not working.
I found some tutorials but nothing helped.
Finally I downloaded the files from /mnt/sdb1/var/lib/mysql and put them into the data folder of my local Windows MySQL installation.
Remember you have to download everything, which includes ib_logfile0, ib_logfile1 and ibdata1, as well as the folder that contains the *.frm files.
Then I opened localhost/phpmyadmin and voila... the files were there.
The rest was pretty simple: exporting the databases and uploading the SQL dumps back to the server.
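For that last step, something like this works (the database name is just a placeholder):
mysqldump -u root -p mydatabase > mydatabase.sql    # export from the local MySQL once it sees the recovered data
mysql -u root -p mydatabase < mydatabase.sql        # import on the new server after uploading the dump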
This took me about 12 hours to figure out.
Thanks again Misha.
I have a git repository that stores audio files.
Obviously, it's not the best usage of git, and the repo has become quite large.
As an alternative, I would like to be able to manipulate these audio files at the command line, "committing" when some work is done.
Is this type of workflow possible when manipulating Amazon S3 files at the command line?
Or do you, for example, scp files to S3?
There are some rsync-style tools for S3 that may work for you; here is an example which I have not tried: http://www.s3rsync.com/
How important are the older versions of the audio? Amazon S3 buckets can have 'versioning' turned on, which gives you full versioning support (a sketch of enabling it is below). You pay full price for each stored version - I don't know whether you have 10 GB or 10 TB to store, what your budget is, etc. Amazon's versioning is nice, but there are not a lot of tools that fully support it.
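A minimal sketch of turning it on with the AWS CLI, assuming a bucket named my-audio-bucket (the bucket and object names are placeholders):
aws s3api put-bucket-versioning --bucket my-audio-bucket --versioning-configuration Status=Enabled
aws s3api list-object-versions --bucket my-audio-bucket --prefix track01.wav    # list all stored versions of an object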
To manipulate S3 files you will first have to download them and then upload them again when you are done; this is relatively simple to do.
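For example, with the AWS CLI the round trip looks roughly like this (bucket and file names are placeholders):
aws s3 cp s3://my-audio-bucket/track01.wav .              # download
# ... edit the file locally ...
aws s3 cp track01.wav s3://my-audio-bucket/track01.wav    # upload the edited file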
However, if the number of files you have is truly large, the slow transfer rate and bandwidth charges will kill you. If you don't have that many files, Dropbox is built on top of S3 and has syncing and rudimentary version control, and bandwidth is not charged.
I feel like using a good networked storage system and git on your LAN is still the better idea.
I have a fairly large amount of data (~30G, split into ~100 files) I'd like to transfer between S3 and EC2: when I fire up the EC2 instances I'd like to copy the data from S3 to EC2 local disks as quickly as I can, and when I'm done processing I'd like to copy the results back to S3.
I'm looking for a tool that'll do a fast / parallel copy of the data back and forth. I have several scripts hacked up, including one that does a decent job, so I'm not looking for pointers to basic libraries; I'm looking for something fast and reliable.
Unfortunately, Adam's suggestion won't work, as his understanding of EBS is wrong (although I wish he were right, and I have often thought myself it should work that way). EBS has nothing to do with S3; it only gives you an "external drive" for EC2 instances that is separate from, but attachable to, the instances. You still have to copy between S3 and EC2, even though there are no data transfer costs between the two.
You didn't mention the operating system of your instance, so I cannot give tailored information. A popular command line tool I use is http://s3tools.org/s3cmd ... it is based on Python and therefore, according to the info on its website, it should work on Windows as well as Linux, although I use it all the time on Linux. You could easily whip up a quick script that uses its built-in "sync" command, which works similarly to rsync, and have it triggered every time you're done processing your data (see the sketch below). You could also use the recursive put and get commands to get and put data only when needed.
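Something along these lines, with the bucket name and paths as placeholders:
s3cmd sync s3://my-bucket/input/ /data/input/      # pull the input data when the instance starts
s3cmd sync /data/output/ s3://my-bucket/output/    # push the results back after processing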
There are graphical tools for Windows too, like CloudBerry Pro, that have some command line options you can use to set up scheduled commands. http://s3tools.org/s3cmd is probably the easiest.
By now there is a sync command in the AWS command line tools that should do the trick: http://docs.aws.amazon.com/cli/latest/reference/s3/sync.html
On startup:
aws s3 sync s3://mybucket /mylocalfolder
before shutdown:
aws s3 sync /mylocalfolder s3://mybucket
Of course, the details are always fun to work out, e.g. how parallel it is (and whether you can make it more parallel, and whether that is any faster given the virtual nature of the whole setup).
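For what it's worth, the CLI's S3 transfer concurrency can be tuned, e.g. (the value 20 is just an arbitrary example):
aws configure set default.s3.max_concurrent_requests 20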
Btw hope you're still working on this... or somebody is. ;)
I think you might be better off using an Elastic Block Store to store your files instead of S3. An EBS is akin to a 'drive' on S3 that can be mounted into your EC2 instance without having to copy the data each time, thereby allowing you to persist your data between EC2 instances without having to write to or read from S3 each time.
http://aws.amazon.com/ebs/
Install the s3cmd package with
yum install s3cmd
or
sudo apt-get install s3cmd
depending on your OS
Then copy data like this:
s3cmd get s3://tecadmin/file.txt
The ls command can also list the files.
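For instance, using the same example bucket as above:
s3cmd ls s3://tecadmin/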
For more details, see this.
For me, the easiest way (for a publicly readable file) is:
wget http://s3.amazonaws.com/my_bucket/my_folder/my_file.ext
from PuTTY.