How to set the S3v2 API for MinIO (local AWS S3) in docker-compose

My original docker-compose is:
s3:
  image: minio/minio
  command: server /data --console-address ":9001"
  ports:
    - 9000:9000
    - 9001:9001
  networks:
    - lambda-local
Is it possible to set S3v2 by default when the container starts?
EDIT
I use Python AWS Lambda functions and AWS S3 (S3v2) in production. My code is written for S3v2 only.
On my computer (for development and unit tests), I just want to swap S3 for MinIO started by docker-compose.
I do not want to change my Lambda code; I only want to change the local MinIO install (docker-compose).
Replacing S3 with MinIO must be transparent to my application.

The MinIO server supports both S3v4 and S3v2 without any additional configuration. As Prakash noted, it's typically an SDK-side setting.
You can test this yourself using the mc command-line tool:
mc alias set --api "S3v4" myminiov4 http://localhost:9000 minioadmin minioadmin
mc alias set --api "S3v2" myminiov2 http://localhost:9000 minioadmin minioadmin
echo "foo" > foo.txt
echo "bar" > bar.txt
mc mb myminiov4/testv4
mc mb myminiov2/testv2
mc cp foo.txt myminiov4/testv4/foo.txt
mc cp bar.txt myminiov2/testv2/bar.txt
You can then read either file using either alias, one of which uses Signature v4 and the other Signature v2.
You should defer to your preferred SDK's documentation for how to set this value. It's worth noting that Sv2 is deprecated, so newer SDK versions might not even support it. You'll first need to confirm that your SDK version supports Sv2 at all, and then enable it when connecting to MinIO. I did not, for example, find an obvious way of setting it in the AWS Java SDK docs (though maybe I just missed it).
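Since the question mentions Python Lambda code, here is a minimal, untested sketch of how this would typically look with boto3, assuming the default minioadmin credentials used in the mc commands above; whether signature_version="s3" (SigV2) is still accepted depends on the botocore version you ship:

import boto3
from botocore.client import Config

# Hedged sketch: point boto3 at the local MinIO container and request
# Signature Version 2 ("s3"). Newer botocore releases may reject SigV2,
# so verify against the version your Lambda code actually uses.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",    # MinIO from the docker-compose above
    aws_access_key_id="minioadmin",           # default MinIO credentials
    aws_secret_access_key="minioadmin",
    config=Config(signature_version="s3"),    # "s3" = SigV2; "s3v4" is the default
)

s3.create_bucket(Bucket="testv2")
s3.put_object(Bucket="testv2", Key="foo.txt", Body=b"foo\n")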
MinIO's Go SDK appears to support it via an override, but I haven't tested that myself.
If you have a specific application which requires Sv2 that isn't working, you'll need to provide more detail before getting further guidance. From a MinIO perspective, I'm not aware of any specific restrictions on either signature version.
Disclaimer: I work for MinIO.

Related

Set mfsymlinks when mounting Azure File volume to ACI instance

Is there a way to specify the mfsymlinks option when mounting an Azure Files share to an ACI container instance?
As shown on learn.microsoft.com, symlinks can be supported in Azure Files when mounted on Linux with the mfsymlinks option, which enables Minshall+French symlinks.
I would like to use an Azure Files share mounted to an Azure Container Instance, but I need to be able to use symlinks in the mounted file system and cannot find a way to specify this option. Does anyone know of a way to do this?
Unfortunately, as far as I know, when you create the container and mount the Azure File Share through the CLI command az container create with parameters such as
--azure-file-volume-account-key
--azure-file-volume-account-name
--azure-file-volume-mount-path
--azure-file-volume-share-name
you cannot set symlink behaviour the way you want, and there is also no parameter for it.
In addition, if you take a look at the template for Azure Container Instances, you can see that there is no property for configuring symlinks. In my opinion, this means you cannot set symlinks for an Azure Container Instance the way you want. Hope this helps.
As a workaround that suits my use case, once the file structure, including symlinks, has been created on the container's local FS, I tar up the files onto the Azure Files share:
tar -cpzf /mnt/letsencrypt/etc.tar.gz -C / etc/letsencrypt/
Then when the container runs again, it extracts from the tarball, preserving the symlinks:
tar -xzf /mnt/letsencrypt/etc.tar.gz -C /
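If you would rather drive the same pack/unpack workaround from application code, a rough Python sketch (paths mirror the tar commands above; everything else is a placeholder) could look like this:

import os
import tarfile

ARCHIVE = "/mnt/letsencrypt/etc.tar.gz"   # on the Azure Files mount
SOURCE = "etc/letsencrypt"                # stored relative to /

def pack():
    # tarfile stores symlinks as symlinks by default (no dereferencing)
    with tarfile.open(ARCHIVE, "w:gz") as tar:
        tar.add(os.path.join("/", SOURCE), arcname=SOURCE)

def unpack():
    # symlinks are recreated on the container's local filesystem on extraction
    with tarfile.open(ARCHIVE, "r:gz") as tar:
        tar.extractall(path="/")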
I'll leave this open for now to see if ACI comes to support the option natively.
Update from Azure docs (azure-files-volume#mount-options):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: azurefile
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  azureFile:
    secretName: azure-secret
    shareName: aksshare
    readOnly: false
  mountOptions:
    - dir_mode=0777
    - file_mode=0777
    - uid=1000
    - gid=1000
    - mfsymlinks
    - nobrl

Tensorflow serving from s3 path is not working - Could not find base path s3://bucketname

Loading a model from S3 is not working for me. As per #615, I compiled the server using:
bazel build -s -c opt --define with_s3_support=true //tensorflow_serving/model_servers:tensorflow_model_server
and now when I run it using
bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --model_base_path=s3://bucketname/
I'm getting
FileSystemStoragePathSource encountered a file-system access error: Could not find base path s3://bucketname/ for servable default
Any tips on how to fix that?
s3://bucketname/ isn't resolvable unless you have the AWS SDK installed on that machine.
A much better approach would be to use the URL of the model on S3. If you're going to do that, you have to either make the bucket public, in which case everyone will be able to access it, or create a bucket policy which allows access from the server's IP.
If you're hosting your server on AWS, you can also launch it with an IAM role and give it S3FullAccess. This is best for any sort of production environment because you don't have to store API Keys in your source code.
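As a quick sanity check, a hedged boto3 sketch (the bucket name is a placeholder) can confirm that the machine running tensorflow_model_server can actually see the bucket with whatever credentials or IAM role it has:

import boto3

# Lists a few keys from the bucket; if this fails, the problem is S3 access,
# not TensorFlow Serving itself.
s3 = boto3.client("s3")
resp = s3.list_objects_v2(Bucket="bucketname", MaxKeys=5)
for obj in resp.get("Contents", []):
    print(obj["Key"])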

Difference between s3cmd, boto and AWS CLI

I am thinking about redeploying my static website to Amazon S3. I need to automate the deployment so I was looking for an API for such tasks. I'm a bit confused over the different options.
Question: What is the difference between s3cmd, the Python library boto and AWS CLI?
s3cmd and AWS CLI are both command line tools. They're well suited if you want to script your deployment through shell scripting (e.g. bash).
AWS CLI gives you simple file-copying abilities through its "s3" command, which should be enough to deploy a static website to an S3 bucket. It also has the small advantage of being pre-installed on Amazon Linux, if that is where you are working from (and it's easily installable through pip elsewhere).
One AWS CLI command that may be appropriate to sync a local directory to an S3 bucket:
$ aws s3 sync . s3://mybucket
Full documentation on this command:
http://docs.aws.amazon.com/cli/latest/reference/s3/sync.html
Edit: As mentioned by @simon-buchan in a comment, the aws s3api command gives you access to the complete S3 API, but its interface is more "raw".
s3cmd supports everything AWS CLI does, plus adds some more extended functionality on top, although I'm not sure you would require any of it for your purposes. You can see all its commands here:
http://s3tools.org/usage
Installation of s3cmd may be a bit more involved because there don't seem to be packages for it in most distros' main repos.
boto is a Python library, and in fact the official AWS Python SDK. The AWS CLI, which is also written in Python, actually uses part of the boto library (botocore). It would be well suited only if you were writing your deployment scripts in Python. There are official SDKs for other popular languages (Java, PHP, etc.) should you prefer them:
http://aws.amazon.com/tools/
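For example, a minimal boto3 sketch of a static-site deployment (the bucket name and local path are placeholders; this assumes the current boto3/botocore packages rather than the legacy boto API) might look like:

import mimetypes
from pathlib import Path

import boto3

s3 = boto3.client("s3")
site_root = Path("./public")              # local build output of the static site

for path in site_root.rglob("*"):
    if path.is_file():
        key = str(path.relative_to(site_root))
        # Set Content-Type so browsers render pages instead of downloading them
        content_type = mimetypes.guess_type(str(path))[0] or "binary/octet-stream"
        s3.upload_file(str(path), "mybucket", key, ExtraArgs={"ContentType": content_type})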
The rawest form of access to S3 is through AWS's REST API. Everything else is built upon it at some point. If you feel adventurous, here's the S3 REST API documentation:
http://docs.aws.amazon.com/AmazonS3/latest/API/APIRest.html

(EC2) Launch Windows instance programmatically via command line

I'd like to launch a Windows 2008 (64-bit, base install) instance programmatically, kinda like clicking on the Launch Instance link & following the "Create a New Instance" wizard.
I read about the ec2-run-instances command and tried running it in PuTTY using this syntax:
/opt/aws/bin/ec2-run-instances ami_id ami-e5784391 -n 1
--availability-zone eu-west-1a --region eu-west-1 --instance-type m1.small --private-key /full/path/MyPrivateKey.pem --group MyRDP
but it always complain that:
Required option '-C, --cert CERT' missing (-h for usage)
According to the documentation, this option isn't required!!
Can someone tell me what's wrong anyway? I'm just trying to programmatically launch a fresh Windows install, run some tests in the cloud, and shut it down after that.
The error message is correct (just try adding --cert ;) - to what documentation are you referring here?
The requirement is clearly outlined in the Microsoft Windows Guide for Amazon EC2, specifically in Task 4: Set the EC2_PRIVATE_KEY and EC2_CERT Environment Variables:
The command line tools need access to an X.509 certificate and a corresponding private key that are associated with your account. [...] You can either specify your credentials with the --private-key and --cert parameters every time you issue a command, or you can create environment variables that point to the credential files on your local system. If the environment variables are properly configured, you can omit the parameters when you issue a command.
[emphasis mine]
Maybe the option of using environment variables has been misleading somehow somewhere?
Alternative
Please note that you can ease and speed up working with EC2 considerably by using alternate scripting environments covering the same ground, in particular the excellent boto, which is a Python package that provides interfaces to Amazon Web Services.
Boto uses the nowadays more common authentication scheme based on access keys only rather than X.509 certificates (e.g. an AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY pair), which furthermore can (and should) be managed via AWS Identity and Access Management (IAM) to avoid the risk of exposing your main AWS account credentials in the first place. See my answer to How to download an EC2 X.509 certificate with an IAM User account? for more details on this.
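For illustration, a hedged sketch with today's boto3 (the successor to the boto package this answer refers to), reusing the AMI, zone, instance type and security group from the question; the key pair name is a placeholder:

import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Launch a single instance using access-key credentials (from the environment
# or an IAM role) instead of the X.509 certificate the ec2-* tools require.
resp = ec2.run_instances(
    ImageId="ami-e5784391",
    InstanceType="m1.small",
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",                  # name of an EC2 key pair, not a .pem path
    SecurityGroups=["MyRDP"],
    Placement={"AvailabilityZone": "eu-west-1a"},
)
print(resp["Instances"][0]["InstanceId"])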
Good luck!

Move Amazon EC2 AMIs between regions via web-interface?

Any easy way to move a custom AMI image between regions? (Tokyo -> Singapore)
I know you can mess about with the API and S3 to get it done, but is there an easier way to do it?
As of December, 2012, Amazon now supports migrating an AMI to another region through the UI tool (Amazon Management Console). See their documentation here
So, how I've done it is:
1. From the AMI, find out the Snapshot ID and how it is attached (e.g. /dev/sda1).
2. Select the snapshot, click "Copy", set the destination region and make the copy (takes a while!).
3. Select the new snapshot, click "Create Image", and fill in:
Architecture: choose 32 or 64 bit
Name/Description: give it one
Kernel ID: when migrating a Linux AMI, choosing "default" may fail. What worked for me was to go to the Amazon Kernels listing here to find the kernels Amazon supports, then specify one when creating the image.
Root Device Name: /dev/sda1
Then click "Yes, Create".
4. Launch an instance from the new AMI and test that you can connect.
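The same manual steps can also be scripted. Here is a rough boto3 sketch, not from the original answer (IDs, names and regions are placeholders), that copies the snapshot to the destination region and registers an image from it:

import boto3

# Work in the *destination* region; CopySnapshot pulls from the source region.
ec2 = boto3.client("ec2", region_name="ap-southeast-1")

copy = ec2.copy_snapshot(
    SourceRegion="ap-northeast-1",
    SourceSnapshotId="snap-12345678",
    Description="copy of my AMI's root snapshot",
)
snapshot_id = copy["SnapshotId"]
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot_id])  # takes a while

image = ec2.register_image(
    Name="my-migrated-ami",
    Architecture="x86_64",
    RootDeviceName="/dev/sda1",
    BlockDeviceMappings=[{"DeviceName": "/dev/sda1", "Ebs": {"SnapshotId": snapshot_id}}],
    # For a paravirtual Linux AMI you may also need KernelId=..., as noted above.
)
print(image["ImageId"])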
You can do it using Eric's post:
http://alestic.com/2010/10/ec2-ami-copy
The following assumes your AWS command-line tools are installed in /opt/aws/bin/, JAVA_HOME=/usr, and that you are running the i386 architecture; otherwise replace i386 with x86_64.
1) Run a live snapshot, assuming your image can fit in 1.5 GB and you have that much to spare in /mnt (check by running df)
/opt/aws/bin/ec2-bundle-vol -d /mnt -k /home/ec2-user/.ec2/pk-XXX.pem -c /home/ec2-user/.ec2/cert-XXX.pem -u 123456789 -r i386 -s 1500
2) Upload to current region's S3 bucket
/opt/aws/bin/ec2-upload-bundle -b S3_BUCKET -m /mnt/image.manifest.xml -a abcxyz -s SUPERSECRET
3) Transfer the image to EU S3 bucket
/opt/aws/bin/ec2-migrate-image -K /home/ec2-user/.ec2/pk-XXX.pem -C /home/ec2-user/.ec2/cert-XXX.pem -o abcxyz -w SUPERSECRET --bucket S3_BUCKET_US --destination-bucket S3_BUCKET_EU --manifest image.manifest.xml --location EU
4) Register your AMI so you can fire up the instance in Ireland
/opt/aws/bin/ec2-register -K /home/ec2-user/.ec2/pk-XXX.pem -C /home/ec2-user/.ec2/cert-XXX.pem http://s3.amazonaws.com:80/S3_BUCKET/image.manifest.xml --region eu-west-1 --name DEVICENAME -a i386 --kernel aki-xxx
There are API tools for this: http://docs.amazonwebservices.com/AWSEC2/latest/CommandLineReference/ApiReference-cmd-MigrateImage.html
I think that is now superseded by ec2-bundle-vol and ec2-migrate-image. BTW, you can also take a look at this Perl script by Lincoln D. Stein:
http://search.cpan.org/~lds/VM-EC2/bin/migrate-ebs-image.pl
Usage:
$ migrate-ebs-image.pl --from us-east-1 --to ap-southeast-1 ami-123456
Amazon has just announced support for this functionality in this blog post. Note that the answer from dmohr relates to copying EBS snapshots, not AMIs.
In case the blog post is unavailable, quoting the relevant parts:
To use AMI Copy, simply select the AMI to be copied from within the AWS Management Console, choose the destination region, and start the copy. AMI Copy can also be accessed via the EC2 Command Line Interface or EC2 API as described in the EC2 User's Guide. Once the copy is complete, the new AMI can be used to launch new EC2 instances in the destination region.
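With today's tooling this is a single API call. A hedged boto3 sketch (the AMI ID is a placeholder), run against the destination region:

import boto3

# CopyImage is called in the destination region and pulls the AMI across.
ec2 = boto3.client("ec2", region_name="ap-southeast-1")   # Singapore
resp = ec2.copy_image(
    Name="my-copied-ami",
    SourceImageId="ami-12345678",
    SourceRegion="ap-northeast-1",                         # Tokyo
)
print(resp["ImageId"])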
AWS now supports copying an EBS snapshot to another region via the UI/CLI/API. You can copy the snapshot and then make an AMI from it. Direct AMI copy is coming - from AWS:
"We also plan to launch Amazon Machine Image (AMI) Copy as a follow-up to this feature, which will enable you to copy both public and custom-created AMIs across regions."
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-copy-snapshot.html?ref_=pe_2170_27415460
Ylastic allows you to move EBS-backed Linux images between regions.
It's $25 or $50 per month, but it looks like you can evaluate it for a week.
I just did this using a script on CloudyScripts and it worked fantastically: https://cloudyscripts.com/tool/show/5 (and it's free).
As of 2017, it's pretty simple: select the AMI in the console, choose "Copy AMI", and pick the destination region.
I'll add Scalr to the list of tools you can use (Disclaimer: I work there). Within Scalr, you can create your own AMIs (we call them roles). Once your role is ready, you just have to choose where you want to deploy it (i.e. in any region).
Scalr is open source, released under the Apache 2 license: you can download it and install it yourself. Otherwise, it is also available as a hosted version including support. Alternatives to Scalr include RightScale and enStratus.