How To Deploy an AWS CloudFormation Template Across Regions?

I was trying to deploy AWS services using CloudFormation, and I was successful deploying to a particular region. Now I want to deploy some of the AWS services in a different region. For example, I have EC2, Lambda, and S3 to deploy, and I have to deploy EC2 and Lambda in the us-west region and S3 in both the EU-East and US-West regions.
Is this possible with one template?
I went through AWS StackSets, but I think that deploys all the AWS services in the template to every mentioned region. I want to have some AWS services in several regions and some in only one specific region.

Assuming you're using the CLI, your best option is to have multiple profiles configured and then perform two deployments, with a different profile for each deployment. Alternatively, you can use parameters as input to your template and use a conditional statement to deploy different resources based on the region you're targeting. Relevant links -
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html
https://forums.aws.amazon.com/thread.jspa?threadID=162459
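
To make the second option concrete, here is a minimal sketch (not a definitive recipe) that deploys one template to two regions with boto3. The stack name, regions, and resources are assumptions for illustration; the Conditions entry keys off the AWS::Region pseudo parameter so the bucket is only created in one region:

    import json
    import boto3

    # Hypothetical template: the SNS topic stands in for the region-independent
    # resources (EC2, Lambda), while the bucket is gated by a region condition.
    TEMPLATE = json.dumps({
        "AWSTemplateFormatVersion": "2010-09-09",
        "Conditions": {
            # True only when the stack is created in us-west-2.
            "CreateBucket": {"Fn::Equals": [{"Ref": "AWS::Region"}, "us-west-2"]}
        },
        "Resources": {
            "Topic": {"Type": "AWS::SNS::Topic"},
            "LogBucket": {"Type": "AWS::S3::Bucket", "Condition": "CreateBucket"},
        },
    })

    # One create_stack call per target region; credentials come from the
    # active profile, so switch profiles here if each region needs its own.
    for region in ["us-west-2", "us-east-1"]:
        cfn = boto3.client("cloudformation", region_name=region)
        cfn.create_stack(StackName="multi-region-demo", TemplateBody=TEMPLATE)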

Related

Override AWS SDK Endpoint for AWS Step Functions Local

I want to test my AWS Step Functions state machine with AWS Step Functions Local (https://docs.aws.amazon.com/step-functions/latest/dg/sfn-local.html), where I mock specific AWS service operations via a fake HTTP server as the endpoint.
AWS Step Functions Local in general works just fine; I can create & start the state machine successfully.
But I use some (service) tasks that utilise the generic AWS SDK client (e.g. CodeCommit) rather than the "optimised" tasks (e.g. DynamoDB).
The endpoints for the latter can be overridden, e.g. by environment variables for Docker (see https://docs.aws.amazon.com/step-functions/latest/dg/sfn-local-config-options.html).
But I see no option to override the "generic" AWS SDK endpoint, so AWS Step Functions Local uses the actual AWS endpoints (https://{service}.{region}.amazonaws.com), which is not what I want.
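
For context, the per-service overrides for the optimised integrations look roughly like this (a sketch based on the config-options page above; the mock-server URLs are placeholders). There is no analogous variable for the generic SDK integrations:

    import subprocess

    # Start Step Functions Local with the documented per-service overrides.
    # LAMBDA_ENDPOINT and DYNAMODB_ENDPOINT only cover the "optimised"
    # integrations; generic AWS SDK tasks still hit the real endpoints.
    subprocess.run([
        "docker", "run", "-p", "8083:8083",
        "-e", "LAMBDA_ENDPOINT=http://host.docker.internal:3001",
        "-e", "DYNAMODB_ENDPOINT=http://host.docker.internal:8000",
        "amazon/aws-stepfunctions-local",
    ], check=True)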
Does anyone know if this can be achieved in some way?
Or, if not, maybe this feature can be requested somehow?
Cheers!

Spinnaker for CD

I was planning to use a Jenkins (CI) ----> Spinnaker (CD) integration for AWS EKS.
Does Spinnaker support multi-cluster deployments?
For example:
I will have 4 clusters in different accounts
and I want to have one Spinnaker deployed to one of the clusters that manages the other 3 as well.
Is this possible?
Yes, it is possible. The suggested way is to have Spinnaker running in an AWS account called Spinnaker or CD, in a specific namespace called spin.
A great guide to follow for Spinnaker in EKS is Continuous Delivery using Spinnaker on Amazon EKS
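
As a rough sketch (the account names and kubeconfig contexts below are assumptions, not from the guide), each cluster is registered as its own Kubernetes account in Halyard so the single Spinnaker install can deploy to all of them:

    import subprocess

    # Hypothetical mapping of Spinnaker account names to kubeconfig contexts,
    # one per EKS cluster in the other AWS accounts.
    clusters = {
        "eks-account-a": "team-a-context",
        "eks-account-b": "team-b-context",
        "eks-account-c": "team-c-context",
    }
    for account, context in clusters.items():
        subprocess.run([
            "hal", "config", "provider", "kubernetes", "account", "add",
            account, "--context", context,
        ], check=True)

    # Push the updated configuration to the running Spinnaker.
    subprocess.run(["hal", "deploy", "apply"], check=True)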

Spinnaker AWS Provider not allowing create cluster

I deployed Spinnaker in AWS to run a test in the same account. However, I am unable to configure server groups. If I click Create, the task is queued with the account configured via hal on the CLI. Is there any way to troubleshoot this? The logs are looking light.
Your storage backend needs to be configured correctly.
https://www.spinnaker.io/setup/install/storage/
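
For example, pointing the storage backend at S3 with Halyard looks roughly like this (a sketch; the bucket name and region are placeholders):

    import subprocess

    # Tell front50 which S3 bucket/region to persist into, then select s3
    # as the storage type.
    subprocess.run([
        "hal", "config", "storage", "s3", "edit",
        "--bucket", "my-spinnaker-bucket", "--region", "us-west-2",
    ], check=True)
    subprocess.run(["hal", "config", "storage", "edit", "--type", "s3"], check=True)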

Can Spinnaker use local storage such as a MySQL database?

I want to deploy Spinnaker for my team, but I have encountered a problem. The Spinnaker documentation says:
Before you can deploy Spinnaker, you must configure it to use one of the supported storage types.
Azure Storage
Google Cloud Storage
Redis
S3
Can Spinnaker use local storage such as a MySQL database?
The Spinnaker microservice responsible for persisting your pipeline configs and application metadata, front50, has support for the storage systems you listed. One could add support for additional systems like MySQL by extending front50, but that support does not exist today.
Some folks have had success configuring front50 to use S3 and pointing it at a Minio installation.
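
Minio speaks the S3 API, which is why front50's S3 backend can be pointed at it. A quick sanity check from Python (the endpoint, credentials, and bucket name are assumptions for illustration):

    import boto3

    # Talk to Minio through the standard S3 client by overriding the endpoint.
    s3 = boto3.client(
        "s3",
        endpoint_url="http://minio.example.internal:9000",
        aws_access_key_id="minio-access-key",
        aws_secret_access_key="minio-secret-key",
    )

    # Create the bucket front50 would use, then confirm it exists.
    s3.create_bucket(Bucket="spinnaker-front50")
    print(s3.list_buckets()["Buckets"])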

Monitoring Amazon S3 logs with Splunk?

We have a large extended network of users that we track using badges. The total traffic is in the neighborhood of 60 million impressions a month. We are currently considering switching from a fairly slow, database-based logging solution (custom-built on PHP, and messy) to a simple log-based alternative that relies on Amazon S3 logs and Splunk.
After using Splunk for some other analysis tasks, I really like it. But it's not clear how to set up a source like S3 with the system. It seems that remote sources require the Universal Forwarder to be installed, which is not an option here.
Any ideas on this?
A very late answer, but I was looking for the same thing and found a Splunk app that does what you want: http://apps.splunk.com/app/1137/. I have not tried it yet, though.
I would suggest logging JSON-preprocessed data to a DocumentDB database, for example using Azure Queues or similar service-bus messaging technologies that fit your scenario, in combination with Azure DocumentDB.
That way you keep your database-based approach but change it to a schemaless, easy-to-scale document DB.
I use http://www.insight4storage.com/ from the AWS Marketplace to track my AWS S3 storage usage totals by prefix, bucket, or storage class over time; it also shows me the previous versions' storage by prefix and per bucket. It has a setting to save the S3 data as Splunk-format logs that might work for your use case, in addition to its UI and web-service API.
You can use the Splunk Add-on for AWS.
This is what I understand:
1. Create a Splunk instance. Use the website version or the on-premises AMI of Splunk to create an EC2 instance where Splunk is running.
2. Install the Splunk Add-on for AWS application on the EC2 instance.
3. Based on the type of input logs (e.g. CloudTrail logs, Config logs, generic logs, etc.), configure the Add-on and supply parameters such as the AWS account ID or IAM role.
4. The Add-on will automatically poll the AWS S3 source and fetch the latest logs after the specified interval (defaults to 30 seconds).
For a generic use case (like ours), you can try configuring a Generic S3 input for Splunk; a quick permissions check is sketched below.
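
Before wiring up the Generic S3 input, it can help to verify that the credentials you will hand the Add-on can actually read the log objects (a sketch; the bucket and prefix are placeholders):

    import boto3

    # List a few objects under the log prefix the Add-on will poll, using the
    # same credentials/role you plan to configure in the Add-on.
    s3 = boto3.client("s3")
    resp = s3.list_objects_v2(Bucket="my-s3-access-logs", Prefix="logs/", MaxKeys=5)
    for obj in resp.get("Contents", []):
        print(obj["Key"], obj["Size"])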