ECS scheduled task containerOverrides for entryPoint not working

I'm creating a scheduled ECS task in Terraform. When I override the container definition's entryPoint, the resulting task does not use the overridden entryPoint. However, overriding command works fine (the new command is run with the existing entry point). I cannot find anything in the docs suggesting that there is no support for entryPoint overriding, but that may be the case?
Below is the code for the CloudWatch event target in Terraform:
resource "aws_cloudwatch_event_target" "ecs_task" {
target_id = "run-${var.task_name}-scheduled"
arn = "${var.cluster_arn}"
rule = "${aws_cloudwatch_event_rule.ecs_task_event_rule.name}"
role_arn = "${aws_iam_role.ecs_event.arn}"
ecs_target = {
launch_type = "${var.launch_type}"
network_configuration = {
subnets = ["${var.subnet_ids}"]
security_groups = ["${var.security_group_ids}"]
}
task_count = 1
task_definition_arn = "${var.task_arn}"
}
input = <<DOC
{
"containerOverrides": [
{
"name": "${var.task_name}",
"entryPoint": ${jsonencode(var.command_overrides)}
}
]
}
DOC
}
This creates a new scheduled task on the AWS console, where the input field is the following:
{
  "containerOverrides": [
    {
      "name": "my-container-name",
      "entryPoint": [
        "sh",
        "/my_script.sh"
      ]
    }
  ]
}
However, tasks launched by this rule do not use the entry point override; they run with the entry point defined in the original task definition.
TL;DR: How can I override the entry point for a scheduled task?

As of today, only a limited set of fields can be overridden, because a scheduled task ultimately uses the run-task API. These fields are the following:
command
environment
taskRoleArn
cpu
memory
memoryReservation
resourceRequirements
Overrides for other container definition fields, such as entryPoint, portMappings, and logConfiguration, are not supported.
The solution is to use command instead of entryPoint in the original task definition, since command can be overridden but entryPoint cannot.
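For example, based on the code in the question, if the original task definition moves "sh /my_script.sh" from entryPoint into command, the event target input can override it like this:
{
  "containerOverrides": [
    {
      "name": "my-container-name",
      "command": [
        "sh",
        "/my_script.sh"
      ]
    }
  ]
}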

Related

How to display text in "Edit Build Information" in a Jenkins project that is called from another pipeline project

Hello: I have a pipeline project in Jenkins called "ProjectLaunch" that launches other projects automatically one day a week. I would like the called project to show in "Edit Build Information" a text that says something like "ProjectCalled full launched". With the pipeline I have in "ProjectLaunch", I can only get that text into its own "Edit Build Information", not into the projects that are called.
My pipeline:
pipeline {
  agent any
  stages {
    stage('Proyect called') {
      steps {
        build job: 'Proyect called', parameters: [string(name: "DESCRIPTION", value: "ProjectCalled full launched")]
      }
    }
  }
}
In the called project's configuration I have an "Execute system Groovy script" step, which is this:
import hudson.model.*
def build = manager.build
def params = build.action(hudson.model.ParametersAction).getParameters()
def description = "${params.DESCRIPTION}"
echo "Setting build description to: ${description}"
build.createSummary("Build description").appendText(description, "html")
But when I launch it I get this error:
Caught: groovy.lang.MissingPropertyException: No such property: manager for class: hudson1130775177255181016
groovy.lang.MissingPropertyException: No such property: manager for class: hudson1130775177255181016
Any ideas?
Thank you.
As far as I know, you can't set the build description of a downstream job from an upstream job. Hence, in your downstream job you can have some logic that sets the description based on the trigger.
pipeline {
  agent any
  stages {
    stage('Hello') {
      steps {
        script {
          echo 'Your second Job'
          if (currentBuild.getBuildCauses()[0]._class == "org.jenkinsci.plugins.workflow.support.steps.build.BuildUpstreamCause") {
            echo "This is triggered by upstream Job"
            currentBuild.description = "ProjectCalled full launched."
          }
        }
      }
    }
  }
}
Since I couldn't do it with Groovy, I've done the following. I installed the "Build Name and Description Setter" plugin. In the "ProjectLaunch" pipeline project, the pipeline is more or less like this:
pipeline {
  agent any
  parameters {
    string(name: "DESCRIPTION", defaultValue: "ProjectCalled full launched", description: "")
  }
  stages {
    stage('ProjectCalled') {
      steps {
        build job: 'ProjectCalled', parameters: [string(name: "DESCRIPTION", value: params.DESCRIPTION)]
      }
    }
  }
}
And in ProjectCalled's settings, I add a parameter, select the plugin option "Changes build description", and write ${DESCRIPTION}.
Ah! I have also created a string parameter named DESCRIPTION, without a value.

AWS credentials missing when running userdata in a new EC2

Using Terraform scripts, I create a new EC2 instance, add a policy to access an S3 bucket, and supply a userdata script that runs aws s3 cp s3://bucket-name/file-name . to copy a file from that S3 bucket, among other commands.
In /var/log/cloud-init-output.log I see fatal error: Unable to locate credentials, presumably caused by the aws s3 cp ... line. When I execute the same command manually on the EC2 instance after it's been created, it works fine (which means the EC2 policy for bucket access is correct).
Any ideas why the aws s3 cp command doesn't work during userdata execution but works once the EC2 instance has been created? Could it be that the S3 access policy is only applied to the EC2 instance after it has been fully created (and after userdata has run)? What would be the correct workaround?
data "aws_iam_policy_document" "ec2_assume_role" {
statement {
effect = "Allow"
actions = [
"sts:AssumeRole",
]
principals {
type = "Service"
identifiers = [
"ec2.amazonaws.com",
]
}
}
}
resource "aws_iam_role" "broker" {
name = "${var.env}-broker-role"
assume_role_policy = data.aws_iam_policy_document.ec2_assume_role.json
force_detach_policies = true
}
resource "aws_iam_instance_profile" "broker_instance_profile" {
name = "${var.env}-broker-instance-profile"
role = aws_iam_role.broker.name
}
resource "aws_iam_role_policy" "rabbitmq_ec2_access_to_s3_distro" {
name = "${env}-rabbitmq_ec2_access_to_s3_distro"
role = aws_iam_role.broker.id
policy = data.aws_iam_policy_document.rabbitmq_ec2_access_to_s3_distro.json
}
data "aws_iam_policy_document" "rabbitmq_ec2_access_to_s3_distro" {
statement {
effect = "Allow"
actions = [
"s3:GetObject",
"s3:GetObjectVersion"
]
resources = ["arn:aws:s3:::${var.distro_bucket}", "arn:aws:s3:::${var.distro_bucket}/*"]
}
}
resource "aws_instance" "rabbitmq_instance" {
iam_instance_profile = ${aws_iam_instance_profile.broker_instance_profile.name}
....
}
This sounds like a timing issue where cloud-init is executed before the EC2 instance profile is set/ready to use. In your cloud-init script, I would add a loop that retries a particular AWS CLI command, or even use the metadata server to retrieve information about the IAM credentials of the EC2 instance.
As the documentation states, you receive the following response when querying the endpoint http://169.254.169.254/latest/meta-data/iam/security-credentials/iam_role_name:
{
  "Code" : "Success",
  "LastUpdated" : "2012-04-26T16:39:16Z",
  "Type" : "AWS-HMAC",
  "AccessKeyId" : "ASIAIOSFODNN7EXAMPLE",
  "SecretAccessKey" : "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
  "Token" : "token",
  "Expiration" : "2017-05-17T15:09:54Z"
}
So your cloud-init/user-data script could wait until the Code attribute equals Success and then proceed with the other operations.
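For example, a minimal user-data sketch (the role name and S3 path are placeholders for the ones in your setup, and the whitespace in the metadata response may vary, hence the loose grep pattern):
#!/bin/sh
# Poll the instance metadata endpoint until the instance profile
# credentials have been delivered, then run the S3 copy.
ROLE_NAME="broker-role"   # placeholder: the IAM role attached to the instance
until curl -s "http://169.254.169.254/latest/meta-data/iam/security-credentials/${ROLE_NAME}" | grep -q '"Code".*"Success"'; do
  echo "Waiting for IAM instance profile credentials..."
  sleep 5
done
aws s3 cp s3://bucket-name/file-name .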

How to create a Hashicorp Vault user using Terraform

I am trying to create a Vault user in Terraform but can't seem to find the appropriate resource to do so. I've searched the Terraform Registry and also performed some online searches, but all to no avail.
All I'm looking to do is create a user, using the Terraform equivalent of the Vault CLI command below:
vault write auth/userpass/users/bob password="passworld123" policies="default"
Any suggestions?
@hitman126 I guess you can make use of the 'vault' provider and the 'vault_auth_backend' resource block. I guess your code should look something like the below:
terraform {
  required_providers {
    vault = {
      source  = "hashicorp/vault"
      version = "3.5.0"
    }
  }
}

provider "vault" {
}

resource "vault_auth_backend" "example" {
  type = "userpass"
}

resource "vault_generic_secret" "developer_sample_data" {
  path      = "secret/foo"
  data_json = <<EOT
{
  "username": "bob",
  "password": "passworld123"
}
EOT
}
In the above code block, path is the full logical path where we write the given data. To write data into the "generic" secret backend mounted in Vault by default, this should be prefixed with 'secret/'.
This might not be a full-fledged solution, but you can try something like this.
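As an additional sketch that is not part of the original answer: the vault provider also has a vault_generic_endpoint resource that can write to arbitrary Vault API paths, including the userpass path from the question, which maps more directly onto the CLI command:
resource "vault_generic_endpoint" "bob" {
  # Mirrors: vault write auth/userpass/users/bob password="passworld123" policies="default"
  path                 = "auth/userpass/users/bob"
  ignore_absent_fields = true

  data_json = jsonencode({
    password = "passworld123"
    policies = ["default"]
  })

  # Assumes the userpass method mounted by the vault_auth_backend block above.
  depends_on = [vault_auth_backend.example]
}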
Solution 2:
If you have Vault installed on your machine and you would like to achieve the above use case using the vault command alone (if you don't want to use the terraform-vault provider), then you can try something like the below.
Create one small sh script with the above vault command (vault-write.sh):
touch vault-write.sh
The content of the script can be similar to the below:
#!/bin/sh
vault write auth/userpass/users/bob password="passworld123" policies="default"
chmod +x vault-write.sh
Create a .tf file with a null_resource and a local-exec provisioner that invokes this sh script:
touch vault.tf
The contents of the vault.tf file can be similar to the below:
terraform {
  required_version = "~> 1.1.1"
}

resource "null_resource" "vault_write" {
  provisioner "local-exec" {
    command = "/bin/sh vault-write.sh"
  }
}

How to Output Terraform Module Variable Names

I'm fairly new to Terraform and I have a question.
I have a bunch of terraform modules calling a main module to create a number of s3 buckets.
module "s3_1" {
source = "../modules/s3-arc"
ENVIRONMENT = var.ENV
bucket_name = var.s3_dep["one"]
}
module "s3_2" {
source = "../modules/s3-arc"
ENVIRONMENT = var.ENV
bucket_name = var.s3_dep["two"]
}
module "s3_3" {
source = "../modules/s3-arc"
ENVIRONMENT = var.ENV
bucket_name = var.s3_dep["three"]
}
It so happens that the policies are being created separately, and so there appears to be a race condition resulting in a NoSuchBucket: The specified bucket does not exist error, because the policies are created first.
I feel like in order to resolve this, I need to add an explicit dependency using depends_on, but I can't seem to figure out how to output the bucket names created by modules s3_1, s3_2, and s3_3 so that I can add the depends_on under the policy section.
How do I output these bucket names?
Inside your module you can declare an output value which returns some attribute of the S3 bucket, and optionally any other objects that contribute to the functionality of the bucket.
For example:
terraform {
  required_providers {
    aws = {
      # I'm using resource types introduced in v4
      # below, so we'll need at least that version.
      source  = "hashicorp/aws"
      version = ">= 4.0.0"
    }
  }
}

variable "bucket_name" {
  type = string
}

resource "aws_s3_bucket" "example" {
  bucket = var.bucket_name
  # ...
}

resource "aws_s3_bucket_acl" "example" {
  bucket = aws_s3_bucket.example.bucket
  acl    = "private"
}

resource "aws_s3_bucket_versioning" "example" {
  bucket = aws_s3_bucket.example.bucket
  versioning_configuration {
    status = "Enabled"
  }
}

output "bucket" {
  value = {
    name = aws_s3_bucket.example.bucket
    arn  = aws_s3_bucket.example.arn
  }

  # The bucket won't be "ready to use" until
  # these other resources are created, so
  # these are "hidden dependencies" as described
  # in the documentation for depends_on
  depends_on = [
    aws_s3_bucket_acl.example,
    aws_s3_bucket_versioning.example,
  ]
}
Using depends_on with an output value means that any object which refers to this output value in the calling module indirectly depends on those other resources too, and so all three of the S3-related resources must be created completely before anything in the caller can make use of the S3 bucket.
When you separately declare a policy for one of these buckets in the root module, you'd refer to the bucket name or ARN via the bucket output value, which therefore completes the necessary dependency edges to get a correct ordering:
module "s3_1" {
source = "../modules/s3-arc"
bucket_name = var.s3_dep["one"]
}
resource "aws_s3_bucket_policy" "example" {
# This reference to module.s3_1.bucket.name establishes
# the needed dependency relationships.
bucket = module.s3_1.bucket.name
policy = jsonencode({
# ...
})
}

How to create a AWS Cognito user with Terraform

I'd like to use Terraform to create an AWS Cognito User Pool with one test user. Creating a user pool is quite straightforward:
resource "aws_cognito_user_pool" "users" {
name = "${var.cognito_user_pool_name}"
admin_create_user_config {
allow_admin_create_user_only = true
unused_account_validity_days = 7
}
}
However, I cannot find a resource that creates an AWS Cognito user. It is doable with the AWS CLI:
aws cognito-idp admin-create-user --user-pool-id <value> --username <value>
Any idea on how to do it with Terraform?
In order to automate things, it can be done in Terraform using a null_resource and a local-exec provisioner to execute your AWS CLI command, e.g.:
resource "aws_cognito_user_pool" "pool" {
name = "mypool"
}
resource "null_resource" "cognito_user" {
triggers = {
user_pool_id = aws_cognito_user_pool.pool.id
}
provisioner "local-exec" {
command = "aws cognito-idp admin-create-user --user-pool-id ${aws_cognito_user_pool.pool.id} --username myuser"
}
}
This isn't currently possible directly in Terraform as there isn't a resource that creates users in a user pool.
There is an open issue requesting the feature but no work has yet started on it.
As it is not possible to do that directly through Terraform, in contrast to matusko's solution I would recommend using a CloudFormation template.
In my opinion it is more elegant because:
it does not require additional applications installed locally
it can be managed by Terraform, as the CF stack can be destroyed by Terraform
A simple solution with a template could look like the below. Keep in mind that I skipped files and resources that are not directly related, such as the provider. The example also contains joining users with groups.
variables.tf
variable "COGITO_USERS_MAIL" {
type = string
description = "On this mail passwords for example users will be sent. It is only method I know for receiving password after automatic user creation."
}
cf_template.json
{
  "Resources" : {
    "userFoo": {
      "Type" : "AWS::Cognito::UserPoolUser",
      "Properties" : {
        "UserAttributes" : [
          { "Name": "email", "Value": "${users_mail}" }
        ],
        "Username" : "foo",
        "UserPoolId" : "${user_pool_id}"
      }
    },
    "groupFooAdmin": {
      "Type" : "AWS::Cognito::UserPoolUserToGroupAttachment",
      "Properties" : {
        "GroupName" : "${user_pool_group_admin}",
        "Username" : "foo",
        "UserPoolId" : "${user_pool_id}"
      },
      "DependsOn" : "userFoo"
    }
  }
}
cognito.tf
resource "aws_cognito_user_pool" "user_pool" {
name = "cogito-user-pool-name"
}
resource "aws_cognito_user_pool_domain" "user_pool_domain" {
domain = "somedomain"
user_pool_id = aws_cognito_user_pool.user_pool.id
}
resource "aws_cognito_user_group" "admin" {
name = "admin"
user_pool_id = aws_cognito_user_pool.user_pool.id
}
user_init.tf
data "template_file" "application_bootstrap" {
template = file("${path.module}/cf_template.json")
vars = {
user_pool_id = aws_cognito_user_pool.user_pool.id
users_mail = var.COGNITO_USERS_MAIL
user_pool_group_admin = aws_cognito_user_group.admin.name
}
}
resource "aws_cloudformation_stack" "test_users" {
name = "${var.TAG_PROJECT}-test-users"
template_body = data.template_file.application_bootstrap.rendered
}
Sources
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-cognito-userpooluser.html
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudformation_stack
Example
Simple project based on:
Terraform,
Cognito,
Elastic Load Balancer,
Auto Scaling Group,
Spring Boot application,
PostgreSQL DB.
Security checks are made on the ELB and in Spring Boot.
This means that the ELB cannot pass unauthorized users to the application, and the application can do further security checks based on PostgreSQL roles, which are mapped to Cognito roles.
Terraform Project and simple application:
https://github.com/test-aws-cognito
Docker image made out of application code:
https://hub.docker.com/r/testawscognito/simple-web-app
More information on how to run it is in the Terraform git repository's README.MD.
It should be noted that the aws_cognito_user resource is now supported in the AWS Terraform provider, as documented here: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cognito_user
Version 4.3.0 at the time of writing.
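For completeness, a minimal sketch using that resource against the user pool from the question (the username and attribute values are illustrative):
resource "aws_cognito_user" "test_user" {
  user_pool_id = aws_cognito_user_pool.users.id
  username     = "bob"

  # Illustrative attributes; adjust to your pool's schema.
  attributes = {
    email          = "bob@example.com"
    email_verified = true
  }
}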