I have extended my Time Capsule storage by adding an external disk to the TC USB port. Everything looks good except I see that /.Trashes on this disk has 241 GB of data that I don't need. How can I get rid of it?
I am comfortable using Terminal and sudo, although I'm very rusty, but I have no clue how to even access this TC external disk from Terminal.
I figured out how to do it. All external disks appear under /Volumes in terminal. I cd'd to the directory under there that held my 241 GB of unwanted data and used
rm -r *
to remove all of it. Of course this "remove recursively" should be used with great caution.
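Roughly, the whole session looked like this (the volume name below is just a placeholder for whatever your external disk is called; double-check with pwd before running rm, since there is no undo):
# list the mounted volumes to find the external disk's name
ls /Volumes
# the volume name here is a placeholder -- use whatever ls showed
cd "/Volumes/TC External Disk/.Trashes"
# remove everything under .Trashes; verify you're in the right place first
sudo rm -r *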
docker-machine has an scp command, but docker-cloud doesn't seem to have any way to transfer a file from my local machine to the cloud container or vice-versa.
I'm submitting an answer below that I've finally figured out (in hopes that it will help someone), but I'd love to hear better answers if there are any!
(I realize docker-cloud is going away, but perhaps this will be helpful for other cloud platforms as well)
To transfer a file from your local machine to a docker-cloud instance that is running linux with the tee command available:
docker-cloud container exec id12345 tee filename.ext < file_to_copy.ext > /dev/null
(you'll want to redirect output to /dev/null as shown unless you want the entire contents of the file to be echoed to the terminal... twice)
Transferring a file to your local machine is somewhat easier:
docker-cloud container exec id12345 cat file_to_copy.ext > filename.ext
Note: I'm not sure this works for binary files, and it can even cause issues with linefeed characters in text files, depending on terminal settings, etc., but it's the best answer I've got short of using an external service like https://transfer.sh
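One possible workaround for binary files, assuming the container image has a shell and base64 available (neither of which is guaranteed), is to encode on one side and decode on the other. The sh -c wrapping and how docker-cloud exec handles the terminal are the parts I'm least sure about:
# local -> container: encode locally, decode inside the container
base64 file_to_copy.bin | docker-cloud container exec id12345 sh -c 'base64 -d > filename.bin'
# container -> local: encode in the container, strip stray carriage returns, decode locally
# (your local base64 may want -D or --decode instead of -d)
docker-cloud container exec id12345 sh -c 'base64 file_to_copy.bin' | tr -d '\r' | base64 -d > filename.bin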
I want to rsync a bucket with 100M files between s3 and gs. I've got a c3.8xlarge instance and did a quick dry run:
$ time gsutil -m rsync -r -n s3://s3-bucket/ gs://gs-bucket/
Building synchronization state...
At source listing 10000...
^C
real 4m11.946s
user 0m0.560s
sys 0m0.268s
About 4 minutes for 10k files. At this rate, it's going to take 27 days just to compute the sync state. Anything I can do to speed this up?
I also noticed [and fixed] the following warning:
WARNING: gsutil rsync uses hashes when modification time is not available at
both the source and destination. Your crcmod installation isn't using the
module's C extension, so checksumming will run very slowly. If this is your
first rsync since updating gsutil, this rsync can take significantly longer
than usual. For help installing the extension, please see "gsutil help crcmod".
Are the file hashes being computed, or am I just waiting for the listing of 100M files?
When setting up a sync process between two buckets, the first iteration is going to be the slowest because it needs to copy all of the data in source-bucket to dest-bucket. For cross-provider syncs, this is further slowed down by the need for two separate connections per object -- one to pull the data from the source to the host machine, and another to funnel it through from the host to the destination (gsutil refers to this as "daisy-chain" mode).
For the initial sync (and possibly subsequent syncs as well) between buckets, you might be better off using GCS's transfer service, which allows GCS to copy the objects on your behalf. This tends to be much faster than doing all the work with one machine running gsutil.
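If you have a recent enough Cloud SDK, the transfer service can also be driven from the command line. A rough sketch (the exact flags, especially for passing AWS credentials, may differ by SDK version, so check gcloud transfer jobs create --help):
# create a transfer job that copies the S3 bucket into the GCS bucket
# (the creds-file flag name is my assumption -- verify against your SDK version)
gcloud transfer jobs create s3://s3-bucket gs://gs-bucket --source-creds-file=aws-creds.json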
As for the warning, it's a general warning that's printed at the beginning of the command execution if you don't have the crcmod C extension installed, regardless of what's present at the destination.
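For completeness, on a Debian/Ubuntu machine the compiled crcmod is typically installed along these lines (see gsutil help crcmod for other platforms; the package names below assume a Debian-family system with Python 3):
# install build prerequisites, then reinstall crcmod so its C extension gets compiled
sudo apt-get install -y gcc python3-dev python3-setuptools
sudo pip3 uninstall -y crcmod
sudo pip3 install --no-cache-dir -U crcmod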
Skyplane is a much faster alternative for transferring data between clouds (up to 110x for large files). You can transfer data with the following commands:
# copy data
skyplane cp -r s3://aws-bucket-name/ gcs://google-bucket-name/
# sync data
skyplane sync -r s3://aws-bucket-name/ gcs://google-bucket-name/
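If I remember the setup correctly (check the Skyplane docs for the current syntax), getting started looks something like:
# install Skyplane with the AWS and GCP extras, then configure cloud credentials
pip install "skyplane[aws,gcp]"
skyplane init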
I get this error when trying to load a table in Google BQ:
Input CSV files are not splittable and at least one of the files is
larger than the maximum allowed size. Size is: 56659381010. Max
allowed size is: 4294967296.
Is there a way to split the file using gsutil or something like that without having to upload everything again?
The largest compressed CSV file you can load into BigQuery is 4 gigabytes. GCS unfortunately does not provide a way to decompress a compressed file, nor does it provide a way to split a compressed file. GZip'd files can't be arbitrarily split up and reassembled in the way you can with a tar file.
I imagine your best bet would likely be to spin up a GCE instance in the same region as your GCS bucket, download your object to that instance (which should be pretty fast, given that it's only a few dozen gigabytes), decompress the object (which will be slower), break that CSV file into a bunch of smaller ones (the linux split command is useful for this), and then upload the objects back up to GCS.
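A rough sketch of that workflow on the GCE instance (the object names and piece size are placeholders; split -C keeps each piece on a line boundary so rows aren't cut in half):
# pull the compressed object from GCS and decompress it
gsutil cp gs://my-bucket/big-file.csv.gz .
gunzip big-file.csv.gz
# split into pieces of at most ~3.5 GB without breaking lines
split -C 3500M big-file.csv big-file-part-
# upload the pieces back to GCS in parallel
gsutil -m cp big-file-part-* gs://my-bucket/split/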
I ran into the same issue and this is how I dealt with it:
First, spin up a Google Compute Engine VM instance.
https://console.cloud.google.com/compute/instances
Then install the gsutil command-line tool and go through the authentication process.
https://cloud.google.com/storage/docs/gsutil_install
Once you have verified that the gcloud, gsutil, and bq commands are working, save a snapshot of the disk as snapshot-1 and then delete the VM.
On your local machine, run this command to create a new disk. This disk is used for the VM so that you have enough space to download and unzip the large file.
gcloud compute disks create disk-2017-11-30 --source-snapshot snapshot-1 --size=100GB
Again on your local machine, run this command to create a new VM instance that uses this disk. I use the --preemptible flag to save some cost.
gcloud compute instances create loader-2017-11-30 --disk name=disk-2017-11-30,boot=yes --preemptible
Now you can SSH into your instance and then run these commands on the remote machine.
First, copy the file from Cloud Storage to the VM:
gsutil cp gs://my-bucket/2017/11/20171130.gz .
Then unzip the file. In my case, for a ~4 GB file, it took about 17 minutes to complete this step:
gunzip 20171130.gz
Once unzipped, you could run the bq load command to load it into BigQuery directly, but I found that for my file size (~70 GB unzipped) that operation would take about 4 hours. Instead, I uploaded the unzipped file back to Cloud Storage:
gsutil cp 20171130 gs://am-alphahat-regional/unzipped/20171130.csv
Now that the file is back on Cloud Storage, you can run this command to delete the VM:
gcloud compute instances delete loader-2017-11-30
Theoretically, the associated disk should also have been deleted, but I found that the disk was still there and I needed to delete it with an additional command:
gcloud compute disks delete disk-2017-11-30
Now finally, you should be able to run the bq load command or you can load the data from the console.
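The final load from Cloud Storage into BigQuery would then be something along these lines (the dataset and table names are placeholders, and the schema options should be adjusted to your data):
# load the unzipped CSV from GCS into a BigQuery table, inferring the schema
bq load --source_format=CSV --autodetect --skip_leading_rows=1 mydataset.mytable gs://am-alphahat-regional/unzipped/20171130.csv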
I have a Dlink NAS (dns-323) in RAID1 that I use to backup family photos, videos and some other data. I also manually rsync to a dedicated backup drive on a little Atom Linux box whenever we add a lot of new files to the NAS. I finally lost a drive on the NAS and through a misstep of my own, also lost the entire volume. No problem, that's what the backup drive is for. I used the same rsync command in reverse to restore files to the NAS after I replaced the bad drive and created a new RAID volume. This worked well, except that after the command finished, I noticed that it did not preserve timestamps. Timestamps were preserved in the NAS->backup direction, but not the backup->NAS direction.
I run the rsync command on the Atom Linux box with these options (this does preserve timestamps):
rsync --archive --human-readable --inplace --numeric-ids --delete /mnt/dns-323 /mnt/dlink_backup --progress --verbose --itemize-changes
The reverse command to restore the volume from the backup (which did not preserve timestamps) is very similar:
rsync --archive --human-readable --inplace --numeric-ids --delete /mnt/dlink_backup/dns-323/ /mnt/dns-323/ --progress --verbose --itemize-changes
which actually restores the files, but gives many errors like:
rsync: failed to set times on "/mnt/dns-323/Rich/Code/.emacs": No such file or directory (2)
I've been googling most of the afternoon and trying different things, but so far haven't solved my problem. I used the 'touch' command to successfully modify the times of one or two files on the NAS, just to prove that it can be done since I believe that is one thing that rsync must do. I've tried doing this as my user and as root. By this I mean that I've run sudo rsync ..... as well as rsync --rsync-path='/usr/bin/sudo /usr/bin/rsync' ..... where ..... is all of the previously mentioned parameters. My /etc/fstab has these entries for the NAS and the backup drive, respectively:
# the dns-323
//192.168.1.202/Volume_1 /mnt/dns-323 cifs guest,rw,uid=1000,gid=1000,nounix,iocharset=utf8,file_mode=0777,dir_mode=0777 0 0
# the dlink_backup drive
/dev/sdb /mnt/dlink_backup ext3 defaults 0 0
It's not absolutely critical to preserve timestamps if it just plain can't be done, but it seems like it should be possible - I'm just stumped.
Thanks in advance. Let me know if I can provide any additional information.
I've earned my "tumbleweed" badge as a result of this one. pats self on back
What I've learned, and my solution:
1) Removed the left hard drive from the dns-323, which is half of the RAID1 volume.
2) Mounted (ext3) this drive using a USB-to-SATA adapter to the machine where I run rsync.
3) Performed the rsync command for the restore outlined above (see the sketch after this list). I removed the --delete option, which really shouldn't have been there, and added the --size-only option. With --size-only, timestamps were essentially the only thing that got updated, since the files themselves had already been restored properly.
4) Unmounted the left drive from the Atom machine and returned that drive to the dns-323, while also removing the right drive. The right drive needs to be removed so that the dns-323 recognizes that the RAID volume is degraded.
5) Re-added the right drive to the dns-323 and told it to rebuild the RAID volume.
6) All timestamps are now good.
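For reference, the command for step 3 looked roughly like this; the destination path for the USB-attached left drive is a placeholder, since I don't remember exactly where I mounted it:
# same restore command as before, but with --delete removed and --size-only added,
# so rsync only fixes up metadata (timestamps) on files that already match by size
rsync --archive --human-readable --inplace --numeric-ids --size-only /mnt/dlink_backup/dns-323/ /mnt/left-drive/ --progress --verbose --itemize-changes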
A possible alternate solution:
I've read enough about rsync and NFS/Samba/CIFS now to understand that this problem is likely related to permissions on the file server (dns-323). Internally, the user/group IDs on the dns-323 are 501/501. No permutation of how I mounted the dns-323 on the Atom box would allow rsync to properly set timestamps. I do believe that changing my user account on the Atom box to have uid/gid of 501/501 would have worked, though. My user had the default 1000/1000 and root had 0/0, IIRC.
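For anyone who wants to try that route, the change would look roughly like this (untested in my case; back things up first, and expect to re-own the user's existing files afterwards; the user and group names are placeholders):
# change the local user/group IDs to match the NAS (501/501)
sudo usermod -u 501 myuser
sudo groupmod -g 501 mygroup
# re-own the user's existing files to the new IDs
sudo chown -R 501:501 /home/myuser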
I'm using Ubuntu and Windows in parallel; on my hard disk I left some space for Windows and some for Linux. Now the disk space is full. How can I move some data from the root partition to another drive without affecting any applications? Please suggest the best approach.
I'm attaching a screenshot from Disk Usage Analyzer.
I experienced the same kind of problem and was able to move the /home partition to a mounted device in two steps after logging in as root (on an Ubuntu 12.04.1 LTS server).
Step 1: Move /home to /mounteddevice/home
Step 2: Update the /etc/passwd file, replacing '/home' with '/mounteddevice/home'
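As a rough sketch, assuming the new device is mounted at /mounteddevice and you're working as root (the sed expression is just one way to do the edit; any editor works):
# Step 1: move the home directories onto the mounted device
mv /home /mounteddevice/home
# Step 2: point every user's home directory at the new location in /etc/passwd
sed -i 's#:/home/#:/mounteddevice/home/#g' /etc/passwd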
The other directories that consume a lot of space are /lib and /var/lib, and I am still searching for a proper way to move /lib and /var/lib.