AWS AmazonConfig listDiscoveredResources does not list resources for some regions - aws-java-sdk

I am syncing AWS resources using the AWS Java SDK method listDiscoveredResources.
This method lists resources for some regions only.
Does anyone know the reason for this?

Related

Providing credentials to the AWS CLI in ECS/Fargate

I would like to create an ECS task with Fargate, and have that upload a file to S3 using the AWS CLI (among other things). I know that it's possible to create task roles, which can provide the task with permissions on AWS services/resources. Similarly, in OpsWorks, the AWS SDK is able to query instance metadata to obtain temporary credentials for its instance profile. I also found these docs suggesting that something similar is possible with the AWS CLI on EC2 instances.
Is there an equivalent for Fargate, i.e., can the AWS CLI, running in a Fargate container, query the metadata service for temporary credentials? If not, what's a good way to authenticate so that I can upload a file to S3? Should I just create a user for this task and pass in AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as environment variables?
(I know it's possible to have an ECS task backed by EC2, but this task is short-lived and run maybe monthly; it seemed a good fit for Fargate.)
"I know that it's possible to create task roles, which can provide the
task with permissions on AWS services/resources."
"Is there an equivalent for Fargate"
You already know the answer. The ECS task role isn't specific to EC2 deployments; it works with Fargate deployments as well.
You can get the task metadata, including temporary IAM credentials, through the ECS metadata service. But you don't need to worry about that, because the AWS CLI, and any AWS SDK, will automatically pull those credentials when running inside an ECS task.
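For reference, here is a minimal sketch (in Python, not part of the original answer) of how the SDKs locate task credentials: inside a task, the ECS agent injects the AWS_CONTAINER_CREDENTIALS_RELATIVE_URI environment variable, and the credentials are served from a fixed link-local address. You would normally never do this by hand.

```python
import os

# Fixed link-local address the ECS agent serves task credentials from.
CREDS_HOST = "http://169.254.170.2"

def task_credentials_url():
    """Return the task-role credentials URL, or None when not inside ECS.

    The AWS CLI and SDKs perform this lookup automatically; this function
    only illustrates where the temporary credentials come from.
    """
    rel = os.environ.get("AWS_CONTAINER_CREDENTIALS_RELATIVE_URI")
    return CREDS_HOST + rel if rel else None
```

Fetching that URL from inside the task returns a JSON document with temporary AccessKeyId, SecretAccessKey, and Token values for the task role.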

Syncing buckets between two S3 Storage Providers

I am currently using RIAK CS as an S3 provider but I want to change to Scality S3. Therefore, I need to migrate the existing data from RIAK to Scality. Is there a quick and easy way of syncing buckets between the two different storage providers? I have two Docker containers running, one for each provider.
One way of doing it would be to simply download the contents of the buckets to a local folder and then upload to Scality using s3cmd or a similar tool. However, I was hoping there was a direct route between the buckets.
Any ideas?
There would not be a "direct route between the buckets".
While the Amazon S3 CopyObject command can copy objects between different Amazon S3 buckets (even if they are in different regions), it will not work with a non-Amazon endpoint.
Your only hope is if Riak and Scality have some built-in connectivity with each other.
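Lacking a direct route, the indirect copy can at least be streamed through one process instead of staged on disk. A rough sketch, assuming both providers speak enough of the S3 API for list/get/put; the client objects would come from an SDK such as boto3, each pointed at its own endpoint URL (names here are illustrative):

```python
def sync_bucket(src, dst, bucket):
    """Copy every object in `bucket` from one S3-compatible client to another.

    `src` and `dst` are S3 clients, e.g. boto3.client("s3", endpoint_url=...),
    pointed at the two providers. Pagination is not handled here; a real run
    would follow list_objects_v2 continuation tokens for >1000 keys.
    """
    copied = []
    for obj in src.list_objects_v2(Bucket=bucket).get("Contents", []):
        body = src.get_object(Bucket=bucket, Key=obj["Key"])["Body"].read()
        dst.put_object(Bucket=bucket, Key=obj["Key"], Body=body)
        copied.append(obj["Key"])
    return copied
```

Each object still travels through the machine running the script, but nothing touches the local filesystem.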

Connect to AWS S3 without API

I've looked everywhere on the Interweb but couldn't find a satisfying answer...
Does anybody know what "protocol" AWS S3 speaks?
Our idea is to write a function for a PLC (there is no chance to use the provided API) to communicate directly with AWS S3.
For example, PLC to "AWS IoT" works over MQTT/HTTP - how can I skip "AWS IoT"?
I know there is the possibility to put an IoT device in between - but we are evaluating our possibilities right now.
Thank you in advance
All of the AWS services have a documented REST API - the S3 one is here. In addition, all of their SDKs are open source, so you could likely get some ideas from them too.
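To give a concrete taste of that protocol: S3 requests are plain HTTP(S), authenticated with AWS Signature Version 4, which is essentially HMAC-SHA256 arithmetic a PLC could in principle implement. A minimal sketch of the documented signing-key derivation (Python here purely for illustration):

```python
import hashlib
import hmac

def sigv4_signing_key(secret_key, date_stamp, region, service="s3"):
    """Derive the SigV4 signing key, per the AWS signature documentation.

    `date_stamp` is YYYYMMDD. The resulting key is used to sign the
    "string to sign" built from the canonical HTTP request; that signature
    goes into the request's Authorization header.
    """
    def _hmac(key, msg):
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

    k_date = _hmac(("AWS4" + secret_key).encode("utf-8"), date_stamp)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    return _hmac(k_service, "aws4_request")
```

The rest of the protocol (canonical request formatting, the x-amz-date and x-amz-content-sha256 headers) is spelled out in the S3 REST API reference, so skipping the SDK is tedious but feasible.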

Modify S3 API to access Ceph instead of Amazon S3 storage

I have a JAR file - jets3t-0.7.4.jar - by which I can access Amazon's S3 storage. I need to modify its source code so that it accesses Ceph object storage instead. I know it can be done by modifying the S3 API, but I do not know how. Does anyone know how to do this? I googled for information, but didn't really find anything informative. Any help is appreciated. Thanks!
Just let the S3 endpoint resolve to your Ceph radosgw (Ceph's S3 API interface) - via /etc/resolv.conf, dnsmasq, JetS3t's config... many ways are available.
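For the JetS3t route specifically, the endpoint can be overridden in jets3t.properties rather than at the DNS layer (the hostname below is a placeholder for your radosgw):

```properties
# jets3t.properties - point JetS3t at the Ceph radosgw instead of Amazon
s3service.s3-endpoint=radosgw.example.com
s3service.s3-endpoint-http-port=80
s3service.https-only=false
# radosgw typically needs path-style access rather than virtual-host buckets
s3service.disable-dns-buckets=true
```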
Many object stores claim that they are S3 compatible, but in fact they are not fully compatible; I think Ceph is one of them. If what you want is full compatibility, google Cloudian.

InvalidAMIID with Amazon AWS EC2 API

I'm querying the AWS API and trying to launch an instance with an AMI ID that is located in us-west-1a. This fails with an InvalidAMIID error. Using the same API keys, I'm able to launch an AMI in us-east-1b. Does anyone have experience with this? I'm positive I'm doing something wrong. Not sure if this is the right place to ask.
AMIs are different from region to region -- you can't use an AMI from us-east-* in us-west-*. If this is a custom AMI, you'll need to move it over to the new region; if it's a public AMI, just find the corresponding AMI ID in the new region.
I've been able to narrow this down to one of two things:
The AWS REST API requires both the region and the availability zone to be specified; it isn't possible to specify just the availability zone.
OR
The above problem actually lies in the 'aws' Ruby gem