Override AWS SDK Endpoint for AWS Step Functions Local - testing

I want to test my AWS Step Functions state machine with AWS Step Functions Local (https://docs.aws.amazon.com/step-functions/latest/dg/sfn-local.html), where I mock specific AWS service operations via a fake HTTP server acting as the endpoint.
AWS Step Functions Local in general works just fine; I can create & start the state machine successfully.
But I use some (service) Tasks that utilise the generic AWS SDK integration (e.g. CodeCommit) rather than "optimised" Tasks (e.g. DynamoDB).
The endpoints for the latter can be overridden, e.g. via environment variables for the Docker container (see https://docs.aws.amazon.com/step-functions/latest/dg/sfn-local-config-options.html).
But I see no option to override the "generic" AWS SDK endpoint, so AWS Step Functions Local calls the actual AWS endpoints (https://{service}.{region}.amazonaws.com), which is not what I want.
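For illustration, this is roughly how I start the container with the documented overrides for the optimised integrations (sketched here with the Docker SDK for Python; the mock URLs and ports are placeholders for my fake HTTP server):

    import docker  # Docker SDK for Python (pip install docker)

    client = docker.from_env()

    # Start Step Functions Local with the documented endpoint overrides for
    # the "optimised" integrations. The mock endpoints are placeholders.
    client.containers.run(
        "amazon/aws-stepfunctions-local",
        detach=True,
        ports={"8083/tcp": 8083},
        environment={
            # These documented overrides work for optimised tasks ...
            "DYNAMODB_ENDPOINT": "http://host.docker.internal:4569",
            "LAMBDA_ENDPOINT": "http://host.docker.internal:4574",
            # ... but I find no analogous variable for the generic
            # "aws-sdk" service integrations (e.g. CodeCommit).
        },
    )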
Does anyone know if this can be achieved in some way?
Or, if not, maybe this feature can be requested somehow?
Cheers!

Related

AWS S3 Java SDK V1 expose JMX metrics

I want to view AmazonS3 JMX metrics, specifically the number of retries performed by the client. The only problem is that, according to this article, the SDK seems to support publishing them only to CloudWatch, and I want to inspect them on the fly using JConsole.
Enabling the metrics without a credentials file doesn't seem to work.
Is there some kind of workaround?

Spinnaker AWS provider not allowing cluster creation

I deployed Spinnaker in AWS to run a test in the same account; however, I am unable to configure server groups. If I click Create, the task is queued against the account configured via hal on the CLI. Is there any way to troubleshoot this? The logs are looking light.
The storage backend needs to be configured correctly:
https://www.spinnaker.io/setup/install/storage/

How To Deploy an AWS CloudFormation Template Across Regions?

I was trying to deploy AWS services using CloudFormation and was successful for a single region. Now I want to deploy some of the services in different regions: for example, I have EC2, Lambda and S3 to deploy, and I need EC2 and Lambda in the us-west region and S3 in both the EU-East and us-west regions.
Is this possible with one template?
I went through AWS StackSets, but I think that deploys all of the services to every listed region. I want some services in several regions and others in only one specific region.
Assuming you're using the CLI, your best option is to configure multiple profiles and then perform two deployments, one with each profile. Alternatively, you can pass parameters into your template and use a conditional statement to deploy different resources based on the region you're targeting. Relevant links (a scripted sketch of the first option follows after them):
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html
https://forums.aws.amazon.com/thread.jspa?threadID=162459
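If you'd rather script it than juggle CLI profiles, a minimal boto3 sketch of per-region deployments could look roughly like this (template file names, stack names and regions here are made up):

    import boto3

    # Hypothetical mapping of region -> template for that region's resources.
    deployments = {
        "us-west-2": "ec2-lambda.yaml",  # EC2 + Lambda stack
        "eu-west-1": "s3-only.yaml",     # S3 stack
    }

    for region, template_file in deployments.items():
        cfn = boto3.client("cloudformation", region_name=region)
        with open(template_file) as f:
            body = f.read()
        # Create one stack per target region from the matching template.
        cfn.create_stack(
            StackName=f"my-app-{region}",
            TemplateBody=body,
        )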

Kubernetes Custom Volume Plugin with Dynamic Provisioning

I have a proprietary file-system and I would like to use it to provide file storage to my K8S pods. I am currently running K8S v1.5.1, but I am open to upgrading to 1.6 if need be.
I would like to make use of Dynamic Provisioning so that volumes are created on an as-needed basis. I went through the official documentation on kubernetes.io and this is what I have understood so far:
I need to write a Kubernetes custom volume plugin for my proprietary file-system.
I need to create a StorageClass which makes use of a provisioner that provisions volumes from my proprietary filesystem (a rough sketch of this step and the next follows after this list).
I then create a PVC that refers to my StorageClass.
I then create my Pods referring to my storage class by name.
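For reference, this is how I picture steps 2 and 3 (a sketch with the Kubernetes Python client; the provisioner and object names are hypothetical, and on 1.5 the beta StorageClass API group and storage-class annotation would apply instead of the fields shown here):

    from kubernetes import client, config

    config.load_kube_config()

    # Hypothetical provisioner name; the custom provisioner (step 1) must
    # watch for claims that end up assigned to it.
    PROVISIONER = "example.com/my-proprietary-fs"

    # Step 2: a StorageClass pointing at the custom provisioner.
    client.StorageV1Api().create_storage_class({
        "apiVersion": "storage.k8s.io/v1",
        "kind": "StorageClass",
        "metadata": {"name": "my-fs"},
        "provisioner": PROVISIONER,
    })

    # Step 3: a PVC that refers to the StorageClass by name.
    client.CoreV1Api().create_namespaced_persistent_volume_claim("default", {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": "my-fs-claim"},
        "spec": {
            "accessModes": ["ReadWriteOnce"],
            "storageClassName": "my-fs",
            "resources": {"requests": {"storage": "1Gi"}},
        },
    })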
What I am not able to make out is:
Are the provisioner referred to by the StorageClass and the K8S volume plugin one and the same? If they are different, how?
There is mention of an External Provisioner in the K8S documentation. Does this mean I can write the K8S volume plugin for my filesystem out-of-tree (outside the K8S code)?
My filesystem provides REST APIs to create filesystem volumes. Can I invoke them in my provisioner/volume plugin?
If I write an out-of-tree plugin, how do I load it in my K8S cluster so that it can be used to provision volumes using the Storage Class?
Appreciate any help in answering any or all of the above.
Thanks!
Are the provisioner referred to by the StorageClass and the K8S volume plugin one and the same? If they are different, how?
It should be the same if you want to provision the storage using that plugin.
There is mention of an External Provisioner in the K8S documentation. Does this mean I can write the K8S volume plugin for my filesystem out-of-tree (outside the K8S code)?
Yes, that's correct.
My filesystem provides REST APIs to create filesystem volumes. Can I invoke them in my provisioner/volume plugin?
Yes, as long as the client is part of the provisioner code.
If I write an out-of-tree plugin, how do I load it in my K8S cluster so that it can be used to provision volumes using the Storage Class?
It can run as a container or you can invoke it by a binary execution model.
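To make that a bit more concrete, a heavily simplified external provisioner could look roughly like this (a sketch with the Kubernetes Python client; the provisioner name, REST helper and NFS stand-in are assumptions, and real provisioners also handle leader election, retries and volume deletion):

    from kubernetes import client, config, watch

    config.load_kube_config()
    core = client.CoreV1Api()

    PROVISIONER = "example.com/my-proprietary-fs"  # hypothetical name


    def create_volume_via_rest_api(pvc):
        """Placeholder for the call to the proprietary filesystem's REST API."""
        return f"/exports/{pvc.metadata.namespace}-{pvc.metadata.name}"


    # Watch for Pending PVCs assigned to this provisioner, create the backing
    # volume through the filesystem's REST API, then publish a matching PV.
    for event in watch.Watch().stream(core.list_persistent_volume_claim_for_all_namespaces):
        pvc = event["object"]
        annotations = pvc.metadata.annotations or {}
        if annotations.get("volume.beta.kubernetes.io/storage-provisioner") != PROVISIONER:
            continue
        if pvc.status.phase != "Pending":
            continue

        path = create_volume_via_rest_api(pvc)

        core.create_persistent_volume({
            "apiVersion": "v1",
            "kind": "PersistentVolume",
            "metadata": {"name": f"pv-{pvc.metadata.name}"},
            "spec": {
                "capacity": dict(pvc.spec.resources.requests),
                "accessModes": list(pvc.spec.access_modes),
                "claimRef": {"namespace": pvc.metadata.namespace,
                             "name": pvc.metadata.name},
                # NFS is only a stand-in for however the proprietary
                # filesystem is actually mounted.
                "nfs": {"server": "fs.example.com", "path": path},
            },
        })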

Retrieve application config from secure location during task start

I want to make sure I'm not storing sensitive keys and credentials in source or in docker images. Specifically I'd like to store my MySQL RDS application credentials and copy them when the container/task starts. The documentation provides an example of retrieving the ecs.config file from s3 and I'd like to do something similar.
I'm using the Amazon ECS optimized AMI with an auto scaling group that registers with my ECS cluster. I'm using the ghost docker image without any customization. Is there a way to configure what I'm trying to do?
You can define a volume on the host and map it to the container with read-only privileges.
Please refer to the following documentation for configuring ECS volume for an ECS task.
http://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_data_volumes.html
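For illustration, a read-only bind mount in the task definition could look roughly like this (a boto3 sketch; the paths, names and memory value are hypothetical, and the same mapping can be expressed in the task definition JSON or the console):

    import boto3

    ecs = boto3.client("ecs")

    # Expose a host directory holding the credentials to the ghost container
    # as a read-only bind mount.
    ecs.register_task_definition(
        family="ghost",
        volumes=[
            {"name": "app-secrets", "host": {"sourcePath": "/etc/app-secrets"}},
        ],
        containerDefinitions=[
            {
                "name": "ghost",
                "image": "ghost:latest",
                "memory": 512,
                "mountPoints": [
                    {
                        "sourceVolume": "app-secrets",
                        "containerPath": "/var/lib/ghost/secrets",
                        "readOnly": True,
                    }
                ],
            }
        ],
    )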
Even though the container does not have the config at build time, it will read the config as if it were available in its own file system.
There are many ways to secure the config on the host OS.
In my past projects, I have achieved the same by disabling ssh into the host and injecting the config at boot-up using cloud-init.
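The boot-time injection step can be as small as a script run from user data / cloud-init that copies the credentials from a private bucket into the host directory the task mounts read-only (bucket, key and paths below are hypothetical; the instance role needs s3:GetObject on the object):

    import boto3

    # Pull the application credentials from a private bucket into the host
    # directory that the ECS task mounts read-only.
    s3 = boto3.client("s3")
    s3.download_file(
        Bucket="my-config-bucket",
        Key="ghost/mysql-credentials.env",
        Filename="/etc/app-secrets/mysql-credentials.env",
    )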