How does AWS CloudFormation support configuration files with conditional logic?

I am looking at the AWS CloudFormation template Conditions section:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/conditions-section-structure.html
I wonder how CloudFormation translates this conditional logic section into conditional code behind the scenes. Did they use some open source framework to do this?
Thanks

Related

How to convert CloudWatch Dashboard source JSON to CDK

We've created a dashboard in CloudWatch and we want it initialized by CDK on every startup across all our environments. We noticed there's a View/edit source option where you can copy-paste JSON in, and we wondered: is there a way to convert the View/edit source to CDK objects or widgets so it would be easier to maintain?
You can do this using the low-level L1 CfnDashboard construct. L1 constructs map 1 to 1 to CloudFormation resources, and since CloudFormation supports creating a dashboard from JSON, this can be done in CDK.
Simply provide your JSON string to the dashboardBody prop of CfnDashboard.
Keep in mind, though, that all the metric names and regions will be hardcoded, so if you need them to change based on the environment, you'll need to do that yourself.
If your goal is ease of maintainability, I would strongly suggest converting your dashboard to CDK code. This should be straightforward to do and will give you readability and ease of modification.
Reference: https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_cloudwatch.CfnDashboard.html#dashboardbody
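For illustration, here is a minimal sketch in CDK v2 Python (the TypeScript version is analogous), assuming you paste the dashboard's View/edit source JSON into dashboard_json; the construct ID and dashboard name below are made-up placeholders:

    import json
    from aws_cdk import Stack, aws_cloudwatch as cloudwatch
    from constructs import Construct

    class DashboardStack(Stack):
        def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
            super().__init__(scope, construct_id, **kwargs)

            # Paste the dashboard's View/edit source JSON here (or load it from a file).
            dashboard_json = {"widgets": []}

            cloudwatch.CfnDashboard(
                self,
                "ImportedDashboard",                      # construct ID: any unique string
                dashboard_name="my-imported-dashboard",   # assumption: pick your own name
                dashboard_body=json.dumps(dashboard_json),
            )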

Reading a YAML properties file from S3

I have a YAML properties file stored in an S3 bucket. In Mule 4 I can read this file using the S3 connector. I need to use the properties defined in this file (reading the values dynamically and using them in Mule 4) in DB connectors. I am not able to create properties from this file such that I can use them, for example, as ${dbUser} in a Mule configuration or flow. Any guidance on how I can accomplish this?
You will not be able to use the S3 connector to do that. The connector can read the file in an operation at execution time, but property placeholders like ${dbUser} have to be defined earlier, at deployment time.
You might be able to read the value into a variable (for example: #[vars.dbUser]) and use the variable in the database connector configuration. That is called a dynamic configuration, because it is evaluated dynamically at execution time.

AWS Glue check file contents correctness

I have a project in AWS to insert data from some files, which will be in S3, into Redshift. The point is that the ETL has to be scheduled each day to find new files in S3 and then check whether those files are correct. However, this has to be done with custom code, as the files can have different formats depending on their kind, provider, etc.
I see that AWS Glue allows you to schedule, crawl, and do the ETL. However, I'm lost on how one can write their own code for the ETL and parse the files to check their correctness before doing the copy from S3 to Redshift. Do you know if that can be done, and how?
Another issue is that if the correctness check passes, the system should upload the data from S3 to a web service via some API. But if it doesn't, the file should be left in an FTP/email instead. Here again, do you know if that can be done with AWS Glue, and how?
Many thanks!
You can write your Glue/Spark code, upload it to S3, and create a Glue job referring to that script/library. Anything you want to write in Python can be done in Glue; it's essentially a wrapper around Spark, which you can drive from Python.
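As a rough sketch of what such a Glue (PySpark) script can look like, under the assumption that the bucket paths, Redshift connection, and table names below are placeholders and the validation rule is just an example:

    import sys
    from pyspark.context import SparkContext
    from awsglue.context import GlueContext
    from awsglue.job import Job
    from awsglue.utils import getResolvedOptions
    from awsglue.dynamicframe import DynamicFrame

    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_context = GlueContext(SparkContext())
    spark = glue_context.spark_session
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Read the new files from S3 (placeholder path).
    df = spark.read.option("header", "true").csv("s3://my-bucket/incoming/")

    # Custom correctness check: required columns exist and no row has a null id.
    required = {"id", "provider", "amount"}
    if not required.issubset(df.columns) or df.filter(df["id"].isNull()).count() > 0:
        raise ValueError("Input files failed validation; aborting the Redshift load")

    # Load the validated data into Redshift through a Glue connection (placeholder names).
    glue_context.write_dynamic_frame.from_jdbc_conf(
        frame=DynamicFrame.fromDF(df, glue_context, "validated"),
        catalog_connection="my-redshift-connection",
        connection_options={"dbtable": "public.my_table", "database": "dev"},
        redshift_tmp_dir="s3://my-bucket/temp/",
    )
    job.commit()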

How to code/implement AWS Batch?

I am pretty new to AWS Batch. I have prior experience with batch implementations.
I went through this link and understood how to configure a batch job on AWS.
I have to implement a simple batch job which will read an incoming pipe-separated file, read the data out of that file, perform some transformation on that data, and then save each line as a separate file in S3.
But I didn't find any document or example where I could see the implementation part. All, or at least most, documents talk only about AWS Batch configuration.
Any ideas on the coding/implementation part? I would be using Java for the implementation.
This might help you. The code is in python though!
AWSLABS/AWS-BATCH-GENOMICS
AWS Batch runs your Docker containers as jobs, so the implementation is not limited to any particular language.
For Java, you could copy your JARs into your Docker image and provide an ENTRYPOINT or CMD to start your code when the job is started in AWS Batch.
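As an illustration of the job code itself, here is a minimal sketch of a container entry point (shown in Python to keep the examples in one language; a Java main class would follow the same shape). The environment variable names and the transformation are assumptions, to be set in the job definition:

    import os
    import boto3

    def main() -> None:
        s3 = boto3.client("s3")
        in_bucket = os.environ["INPUT_BUCKET"]    # assumed env vars, passed by the job definition
        in_key = os.environ["INPUT_KEY"]
        out_bucket = os.environ["OUTPUT_BUCKET"]

        # Read the incoming pipe-separated file from S3.
        body = s3.get_object(Bucket=in_bucket, Key=in_key)["Body"].read().decode("utf-8")

        for i, line in enumerate(body.splitlines()):
            fields = line.split("|")
            # Placeholder transformation: trim and upper-case each field, emit as CSV.
            transformed = ",".join(f.strip().upper() for f in fields)
            # Save each transformed line as its own S3 object.
            s3.put_object(
                Bucket=out_bucket,
                Key=f"output/line-{i:06d}.csv",
                Body=transformed.encode("utf-8"),
            )

    if __name__ == "__main__":
        main()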

Read messages from SQS into Dataflow

I've got a bunch of data being generated in AWS S3, with PUT notifications being sent to SQS whenever a new file arrives in S3. I'd like to load the contents of these files into BigQuery, so I'm working on setting up a simple ETL in Google Dataflow. However, I can't figure out how to integrate Dataflow with any service that it doesn't already support out of the box (Pubsub, Google Cloud Storage, etc.).
The GDF docs say:
In the initial release of Cloud Dataflow, extensibility for Read and Write transforms has not been implemented.
I think I can confirm this, as I tried to write a Read transform and wasn't able to figure out how to make it work (I tried to base an SqsIO class on the provided PubsubIO class).
So I've been looking at writing a custom source for Dataflow, but can't wrap my head around how to adapt a Source to polling SQS for changes. It doesn't really seem like the right abstraction anyway, but I wouldn't really care if I could get it working.
Additionally, it looks like I'd have to do some work to download the S3 files (I tried creating a Reader for that as well, with no luck because of the above-mentioned reason).
Basically, I'm stuck. Any suggestions for integrating SQS and S3 with Dataflow would be very appreciated.
The Dataflow Java SDK now includes an API for defining custom unbounded sources:
https://github.com/GoogleCloudPlatform/DataflowJavaSDK/blob/master/sdk/src/main/java/com/google/cloud/dataflow/sdk/io/UnboundedSource.java
This can be used to implement a custom SQS Source.