How can I change the default location for storing Docker images in Windows? I currently have Docker installed on my C: drive, and the images are stored in the following location:
C:\Users\xxxxx\AppData\Local\Docker\wsl\data.
I want to change the default location to my D: drive. I am using WSL 2 as the backend for Docker, and I have read that I can use the .wslconfig file to configure Docker. However, I am not sure how to set up the .wslconfig file to change the default image location. My WSL 2 distribution, installed from the Microsoft Store, is located on my D: drive.
I'm using Docker version 20.10.21, and these are my WSL specs:
WSL version: 1.0.3.0
Kernel version: 5.15.79.1
WSLg version: 1.0.47
MSRDC version: 1.2.3575
Direct3D version: 1.606.4
DXCore version: 10.0.25131.1002-220531-1700.rs-onecore-base2-hyp
Windows version: 10.0.22000.1335
I'm using the Ubuntu distro in WSL, and Docker Desktop v4.15.0.
I tried making some changes in .wslconfig, but I couldn't find any option for storage or anything similar.
Caveats/Preface:
I've tried this and it works, but I cannot guarantee that long-term it will continue to work. There's the potential that something will break when Docker Desktop upgrades in the future.
In general I don't recommend registry hacks, but I'm not aware of another way to do this. Other than the previous caveat, this seems fairly safe.
No, there's no .wslconfig option for changing the location of a distribution.
With that in mind, here's what I did to move docker-desktop-data to the D: drive:
Create the directory. I'll use D:\wsl\docker-desktop-data as an example.
Stop Docker Desktop by right-clicking its system tray icon and choosing Quit Docker Desktop.
From PowerShell:
wsl --shutdown
Confirm the location (BasePath) and registry key (PSChildName) of the docker-desktop-data via:
Get-ChildItem HKCU:\Software\Microsoft\Windows\CurrentVersion\Lxss\ |
ForEach-Object {
    Get-ItemProperty $_.PSPath
} | Where-Object {
    $_.DistributionName -eq "docker-desktop-data"
}
Move ext4.vhdx from the BasePath directory identified above to the D:\wsl\docker-desktop-data directory.
In regedit, navigate to:
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Lxss
Find the subkey matching the PSChildName from above.
Modify the BasePath to point to \\?\D:\wsl\docker-desktop-data
Restart Docker Desktop.
Test that your existing images are still available by running one of them (see the quick check below).
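For example, a quick check from any shell (the image reference in angle brackets is a placeholder for one of your own images):
docker images
docker run --rm <one-of-your-existing-images>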
I have a quick question on rclone.
I am trying to download data from tradestatistics.io, which gives this sample code for downloading:
rclone sync spaces:tradestatistics/hs-rev1992-visualization hs-rev1992-visualization
My question is: how do I access the list of files in that source, and can it be done directly from the terminal?
Assuming you've already installed rclone (https://rclone.org/downloads/)
To configure rclone to see storage on S3, see https://rclone.org/s3/
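If the remote isn't configured yet, one way to set it up non-interactively is rclone config create. This is only a sketch; the remote name, provider, endpoint, and keys below are assumptions to replace with your own:
rclone config create spaces s3 \
    provider DigitalOcean \
    access_key_id YOUR_ACCESS_KEY \
    secret_access_key YOUR_SECRET_KEY \
    endpoint nyc3.digitaloceanspaces.com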
Assuming spaces: is your correctly configured rclone source remote, you can list all files from the terminal with the rclone lsl command:
rclone lsl spaces:tradestatistics/hs-rev1992-visualization
where tradestatistics is the bucket and hs-rev1992-visualization is the root folder.
A more human-readable listing can be produced with lsf. It's not recursive by default, so add -R:
rclone lsf -R spaces:
More details at https://rclone.org/commands/rclone_lsl/ with info on other lists.
When I log in to my S3 console I am unable to download multiple selected files (the web UI allows downloads only when one file is selected):
https://console.aws.amazon.com/s3
Is this something that can be changed in the user policy or is it a limitation of Amazon?
It is not possible through the AWS Console web user interface.
But it's a very simple task if you install the AWS CLI.
You can check the installation and configuration steps in Installing the AWS Command Line Interface.
After that you go to the command line:
aws s3 cp --recursive s3://<bucket>/<folder> <local_folder>
This will copy all the files from the given S3 path to the given local path.
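For example (the bucket and folder names here are just placeholders):
aws s3 cp --recursive s3://my-bucket/reports ./reports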
Selecting a bunch of files and clicking Actions->Open opened each in a browser tab, and they immediately started to download (6 at a time).
If you use the AWS CLI, you can use the --exclude flag along with the --include and --recursive flags to accomplish this:
aws s3 cp s3://path/to/bucket/ . --recursive --exclude "*" --include "things_you_want"
Eg.
--exclude "*" --include "*.txt"
will download all files with the .txt extension. More details: https://docs.aws.amazon.com/cli/latest/reference/s3/
I believe it is a limitation of the AWS console web interface, having tried (and failed) to do this myself.
Alternatively, perhaps use a 3rd party S3 browser client such as http://s3browser.com/
If you have Visual Studio with the AWS Explorer extension installed, you can also browse to Amazon S3 (step 1), select your bucket (step 2), select all the files you want to download (step 3), and right-click to download them all (step 4).
The S3 service has no meaningful limits on simultaneous downloads (easily several hundred downloads at a time are possible) and there is no policy setting related to this... but the S3 console only allows you to select one file for downloading at a time.
Once the download starts, you can start another and another, as many as your browser will let you attempt simultaneously.
In case someone is still looking for an S3 browser and downloader, I have just tried FileZilla Pro (it's a paid version). It worked great.
I created a connection to S3 with an access key and secret key set up via IAM. The connection was instant and downloading all folders and files was fast.
Using AWS CLI, I ran all the downloads in the background using "&" and then waited on all the pids to complete. It was amazingly fast. Apparently the "aws s3 cp" knows to limit the number of concurrent connections because it only ran 100 at a time.
aws --profile $awsProfile s3 cp "$s3path" "$tofile" &
pids[${npids}]=$! ## save the spawned pid
let "npids=npids+1"
followed by
echo "waiting on $npids downloads"
for pid in ${pids[*]}; do
echo $pid
wait $pid
done
I downloaded 1500+ files (72,000 bytes) in about a minute
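Here is a minimal self-contained sketch of the same pattern; the list file name (s3paths.txt) and the profile are assumptions you'd replace with your own:
#!/bin/bash
# start one background download per listed object, then wait for all of them
awsProfile=default
npids=0
while read -r s3path; do
    tofile=$(basename "$s3path")
    aws --profile "$awsProfile" s3 cp "$s3path" "$tofile" &
    pids[npids]=$!              # save the spawned pid
    npids=$((npids + 1))
done < s3paths.txt
echo "waiting on $npids downloads"
for pid in "${pids[@]}"; do
    wait "$pid"
done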
I wrote a simple shell script to download not just all the files, but also all versions of every file, from a specific folder in an AWS S3 bucket. Here it is; you may find it useful:
# Script lists the version info for all the content under a
# particular prefix, grabs the key and versionId of each version,
# builds a fully qualified HTTPS URL for every versioned file,
# and uses that to download the content.
# Note: the plain HTTPS download only works if the objects are
# publicly readable; for a private bucket use
# `aws s3api get-object --version-id` instead of curl.
s3region="s3.ap-south-1.amazonaws.com"
bucket="your_bucket_name"
# note the location has no forward slash at beginning or at end
location="data/that/you/want/to/download"
# file names were like ABB-quarterly-results.csv, AVANTIFEED--quarterly-results.csv
fileNamePattern="-quarterly-results.csv"
# AWS CLI command to get key/version pairs for everything under the prefix
aws s3api list-object-versions --bucket "$bucket" --prefix "$location/" \
    --query 'Versions[].[Key,VersionId]' --output text |
    grep -- "$fileNamePattern" |
    while read -r key version; do
        echo "############### $key ($version) ###################"
        url="https://$s3region/$bucket/$key?versionId=$version"
        echo "$url"
        # save each version next to the others, suffixed with its version id
        curl -s "$url" -o "$(basename "$key")-$version"
    done
You can also use --include "filename" many times in a single command, each time with a different filename inside the double quotes, e.g.
aws s3 mycommand --include "file1" --include "file2"
This will save you time compared to repeating the command to download one file at a time.
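For example, to pull just two specific files (the bucket and file names are placeholders; the --exclude "*" is needed so that only the included names are matched):
aws s3 cp s3://my-bucket/reports/ . --recursive --exclude "*" --include "report-2023.csv" --include "report-2024.csv"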
Also, if you are running Windows, WinSCP now allows drag and drop of a selection of multiple files, including sub-folders.
Many enterprise workstations will have WinSCP installed for editing files on servers by means of SSH.
I am not affiliated; I simply think this is really worth mentioning.
In my case Aur's approach didn't work, and if you're looking for a quick solution to download all the files in a folder using just the browser, you can try entering this snippet in your dev console:
(function() {
    const rows = Array.from(document.querySelectorAll('.fix-width-table tbody tr'));
    const downloadButton = document.querySelector('[data-e2e-id="button-download"]');
    const timeBetweenClicks = 500;
    function downloadFiles(remaining) {
        if (!remaining.length) {
            return;
        }
        const row = remaining[0];
        row.click();
        downloadButton.click();
        setTimeout(() => {
            downloadFiles(remaining.slice(1));
        }, timeBetweenClicks);
    }
    downloadFiles(rows);
}())
I did this by creating a shell script using the AWS CLI (e.g. example.sh):
#!/bin/bash
aws s3 cp s3://s3-bucket-path/example1.pdf LocalPath/Download/example1.pdf
aws s3 cp s3://s3-bucket-path/example2.pdf LocalPath/Download/example2.pdf
Give execute permission to example.sh (e.g. chmod +x example.sh),
then run your shell script: ./example.sh
I think the simplest way to download or upload files is to use the aws s3 sync command. You can also use it to sync two S3 buckets at the same time.
aws s3 sync <LocalPath> <S3Uri> or <S3Uri> <LocalPath> or <S3Uri> <S3Uri>
# Download file(s)
aws s3 sync s3://<bucket_name>/<file_or_directory_path> .
# Upload file(s)
aws s3 sync . s3://<bucket_name>/<file_or_directory_path>
# Sync two buckets
aws s3 sync s3://<1st_s3_path> s3://<2nd_s3_path>
What I usually do is mount the S3 bucket (with s3fs) on a Linux machine and zip the files I need into a single archive; then I just download that file from any PC/browser.
# mount bucket in file system
/usr/bin/s3fs s3-bucket -o use_cache=/tmp -o allow_other -o uid=1000 -o mp_umask=002 -o multireq_max=5 /mnt/local-s3-bucket-mount
# zip files into one
cd /mnt/local-s3-bucket-mount
zip all-processed-files.zip *.jpg
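If the bucket isn't public, one way to grab the resulting archive from a browser is a presigned URL, assuming the AWS CLI is configured for the same account (the bucket and file names follow the example above):
aws s3 presign s3://s3-bucket/all-processed-files.zip --expires-in 3600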
import os
import boto3
s3 = boto3.resource('s3', aws_access_key_id="AKIAxxxxxxxxxxxxJWB",
                    aws_secret_access_key="LV0+vsaxxxxxxxxxxxxxxxxxxxxxry0/LjxZkN")
my_bucket = s3.Bucket('s3testing')
# download every object into the current directory
for s3_object in my_bucket.objects.all():
    # use just the file name locally; passing the full key would point at
    # local sub-directories that don't exist and raise a "file not found" error
    path, filename = os.path.split(s3_object.key)
    if not filename:  # skip "folder" placeholder keys
        continue
    my_bucket.download_file(s3_object.key, filename)
I'm very new at this and need some help; I'm sure I'm not doing something right. I have a Synology NAS that has a cool option to sync files to Google Cloud Storage. This is a great way to get my backups off-site.
I have my backups syncing to a Coldline storage bucket. Now that my files are syncing, I'm looking to document the process in case I need to retrieve them.
I want to download a whole folder and all of the files inside it to a Windows server. I installed gsutil and am trying to run this command:
gsutil -m cp -R dir gs://bhp_backup_sync/backup/foldername
but after I run this I get the following exception.
CommandException: No URLs matched: dir
CommandException: 1 file/object could not be transferred.
Noob here; what am I missing?
I am working with an S3 bucket. I need to copy an image from my Amazon server to the S3 bucket. Any idea how I can do it? I saw some sample code, but I don't know how to use it.
if (S3::copyObject($sourceBucket, $sourceFile, $destinationBucket, $destinationFile, S3::ACL_PRIVATE)) {
echo "Copied file";
} else {
echo "Failed to copy file";
}
It seems that this code only copies from bucket to bucket, not from the server?
Thanks for the help.
Copy between S3 Buckets
AWS released a command line interface for copying between buckets.
http://aws.amazon.com/cli/
$ aws s3 sync s3://mybucket-src s3://mybucket-target --exclude "*.tmp"
This will copy from the source bucket to the target bucket.
I have not tested this, but I believe it will operate in series, downloading the files to your system and then uploading them to the target bucket.
See the documentation here: S3 CLI Documentation
I've used s3cmd for several years, and it's been very reliable. If you're using Ubuntu it's available with:
apt-get install s3cmd
You can also use one of the SDKs to develop your own tool.