Terraform cannot read from remote state - amazon-s3

Terraform version
v0.12.1
AWS provider version
v2.16.0
I have Terraform workspaces configured. At the moment my workspace points to dev, where I have one tfstate file for my VPCs and subnets and a different one for my security groups. However, when I try to reference vpc_id from my VPC remote tfstate in my security group configuration, I get the error message below:
No stored state was found for the given workspace in the given backend.
My S3 bucket is laid out like this:
nonprod-us-east-1
|-- env
    |-- dev
        |-- vpc_subnet/tfstate
        |-- security_group/tfstate
Terraform Configuration Files
Security-Group tf config
terraform {
backend "s3"{
# Configuration will be injected by environment variables.
}
}
provider "aws" {
region = "${var.region}"
}
data "terraform_remote_state" "vpc_subnet" {
backend = "s3"
config = {
bucket = "nonprod-us-east-1"
key = "vpc_subnet/tfstate"
region = "us-east-1"
}
}
vpc_id = "${data.terraform_remote_state.vpc_subnet.outputs.vpc_id}"
And I've verified that the vpc_subnet/tfstate outputs include vpc_id.
Outputs from the VPC subnet tfstate:
"outputs": {
"private_subnet_cidr_blocks": {
"value": [
"10.0.3.0/24",
"10.0.4.0/24",
"10.0.5.0/24"
],
"type": [
"tuple",
[
"string",
"string",
"string"
]
]
},
"private_subnet_ids": {
"value": [
"subnet-042a16dd291e90add",
"subnet-02e8322d996968a3f",
"subnet-078f525c24015b364"
],
"type": [
"tuple",
[
"string",
"string",
"string"
]
]
},
"public_subnet_cidr_blocks": {
"value": [
"10.0.0.0/24",
"10.0.1.0/24",
"10.0.2.0/24"
],
"type": [
"tuple",
[
"string",
"string",
"string"
]
]
},
"public_subnet_ids": {
"value": [
"subnet-0ba92a28f6e8ddd95",
"subnet-08efcb80bed22f4e2",
"subnet-0b641797bfe207a0b"
],
"type": [
"tuple",
[
"string",
"string",
"string"
]
]
},
"vpc_id": {
"value": "vpc-0bb7595ff05fed581",
"type": "string"
}
}
Expected Behavior
Terraform should be able to read vpc_id from the remote tfstate location.
Actual Behavior
It fails to read the output from the remote tfstate.

Finally sorted it out: it turned out to be an issue with the bucket key. Since I'm using Terraform workspaces, the tfstate files are created under the folder env:/dev/vpc_subnet/tfstate. After correcting the bucket key, Terraform is able to resolve the tfstate files.
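For reference, here is a minimal sketch of the corrected data source. It assumes the default S3 workspace key prefix (env:) and a workspace named dev; you can either point key at the full env:/dev/... path or set the workspace argument and let the backend build the path for you:
data "terraform_remote_state" "vpc_subnet" {
  backend   = "s3"
  workspace = "dev" # resolves to env:/dev/vpc_subnet/tfstate in the bucket

  config = {
    bucket = "nonprod-us-east-1"
    key    = "vpc_subnet/tfstate"
    region = "us-east-1"
  }
}
With that in place, data.terraform_remote_state.vpc_subnet.outputs.vpc_id resolves as expected.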

Related

Set Subnet ID and EC2 Key Name in EMR Cluster Config via Step Functions

As of November 2019, AWS Step Functions has native support for orchestrating EMR clusters, so we are trying to configure a cluster and run some jobs on it.
We could not find any documentation on how to set the SubnetId or the key name used for the EC2 instances in the cluster. Is this possible?
As of now, our create-cluster step looks like the following:
"States": {
"Create an EMR cluster": {
"Type": "Task",
"Resource": "arn:aws:states:::elasticmapreduce:createCluster.sync",
"Parameters": {
"Name": "TestCluster",
"VisibleToAllUsers": true,
"ReleaseLabel": "emr-5.26.0",
"Applications": [
{ "Name": "spark" }
],
"ServiceRole": "SomeRole",
"JobFlowRole": "SomeInstanceProfile",
"LogUri": "s3://some-logs-bucket/logs",
"Instances": {
"KeepJobFlowAliveWhenNoSteps": true,
"InstanceFleets": [
{
"Name": "MasterFleet",
"InstanceFleetType": "MASTER",
"TargetOnDemandCapacity": 1,
"InstanceTypeConfigs": [
{
"InstanceType": "m3.2xlarge"
}
]
},
{
"Name": "CoreFleet",
"InstanceFleetType": "CORE",
"TargetSpotCapacity": 2,
"InstanceTypeConfigs": [
{
"InstanceType": "m3.2xlarge",
"BidPriceAsPercentageOfOnDemandPrice": 100 }
]
}
]
}
},
"ResultPath": "$.cluster",
"End": "true"
}
}
As soon as we try to add a "SubnetId" key to any of the sub-objects in Parameters, or to Parameters itself, we get the error:
Invalid State Machine Definition: 'SCHEMA_VALIDATION_FAILED: The field "SubnetId" is not supported by Step Functions at /States/Create an EMR cluster/Parameters' (Service: AWSStepFunctions; Status Code: 400; Error Code: InvalidDefinition;
Referring to the Step Functions docs on the EMR integration, we can see that createCluster.sync uses the EMR API RunJobFlow. In RunJobFlow we can specify Ec2KeyName and Ec2SubnetId at the paths $.Instances.Ec2KeyName and $.Instances.Ec2SubnetId.
With that said, I managed to create a state machine with the following definition (on a side note, your definition had a syntax error: "End": "true" should be "End": true):
{
"Comment": "A Hello World example of the Amazon States Language using Pass states",
"StartAt": "Create an EMR cluster",
"States": {
"Create an EMR cluster": {
"Type": "Task",
"Resource": "arn:aws:states:::elasticmapreduce:createCluster.sync",
"Parameters": {
"Name": "TestCluster",
"VisibleToAllUsers": true,
"ReleaseLabel": "emr-5.26.0",
"Applications": [
{
"Name": "spark"
}
],
"ServiceRole": "SomeRole",
"JobFlowRole": "SomeInstanceProfile",
"LogUri": "s3://some-logs-bucket/logs",
"Instances": {
"Ec2KeyName": "ENTER_EC2KEYNAME_HERE",
"Ec2SubnetId": "ENTER_EC2SUBNETID_HERE",
"KeepJobFlowAliveWhenNoSteps": true,
"InstanceFleets": [
{
"Name": "MasterFleet",
"InstanceFleetType": "MASTER",
"TargetOnDemandCapacity": 1,
"InstanceTypeConfigs": [
{
"InstanceType": "m3.2xlarge"
}
]
},
{
"Name": "CoreFleet",
"InstanceFleetType": "CORE",
"TargetSpotCapacity": 2,
"InstanceTypeConfigs": [
{
"InstanceType": "m3.2xlarge",
"BidPriceAsPercentageOfOnDemandPrice": 100
}
]
}
]
}
},
"ResultPath": "$.cluster",
"End": true
}
}
}
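If you'd rather not hard-code these values, the Amazon States Language also lets you take them from the execution input by suffixing a parameter name with .$ and supplying a JSONPath. A small sketch of the Instances block, assuming the execution input carries keyName and subnetId fields (those field names are placeholders, not part of the EMR API):
"Instances": {
  "Ec2KeyName.$": "$.keyName",
  "Ec2SubnetId.$": "$.subnetId",
  "KeepJobFlowAliveWhenNoSteps": true
}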

Issues connecting Apache Drill with an AWS S3 region which only supports Signature Version 4

I'm having issues connecting to S3 using the Apache Drill storage plugin.
This is the error I receive:
Error: SYSTEM ERROR: AmazonS3Exception: Status Code: 400, AWS Service: Amazon S3, AWS Request ID: 53660B0E11F34387, AWS Error Code: null, AWS Error Message: Bad Request
core-site.xml
<property>
<name>fs.s3a.access.key</name>
<value>accesskey</value>
</property>
<property>
<name>fs.s3a.secret.key</name>
<value>secretkey</value>
</property>
<property>
<name>fs.s3a.endpoint</name>
<value>s3.ap-south-1.amazonaws.com</value>
</property>
storage plugin
{
"type": "file",
"connection": "s3a://bucket/",
"config": null,
"workspaces": {
"tmp": {
"location": "/tmp",
"writable": true,
"defaultInputFormat": null,
"allowAccessOutsideWorkspace": false
},
"root": {
"location": "/",
"writable": false,
"defaultInputFormat": null,
"allowAccessOutsideWorkspace": false
}
},
"formats": {
"psv": {
"type": "text",
"extensions": [
"tbl"
],
"delimiter": "|"
},
"csv": {
"type": "text",
"extensions": [
"csv"
],
"delimiter": ","
},
"tsv": {
"type": "text",
"extensions": [
"tsv"
],
"delimiter": "\t"
},
"parquet": {
"type": "parquet"
},
"json": {
"type": "json",
"extensions": [
"json"
]
},
"avro": {
"type": "avro"
},
"sequencefile": {
"type": "sequencefile",
"extensions": [
"seq"
]
},
"csvh": {
"type": "text",
"extensions": [
"csvh"
],
"extractHeader": true,
"delimiter": ","
}
},
"enabled": true
}
This works fine with us-east-1, which supports both Signature Version 2 and Signature Version 4; the problem is specific to newer regions that only support Signature Version 4.
Any suggestions on how to fix this?
I had the same error occur while trying to connect Apache Drill 1.16.0 to a Signature Version 4 only S3 bucket in the MEA region.
The logs indicated an illegal location constraint even though the bucket URL included the location.
Upgrading Drill to 1.20.0 and ZooKeeper to 3.5.7 fixed it, and I was able to authenticate successfully. Hope this helps!
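If upgrading is not an option, a workaround that is sometimes suggested (a sketch under assumptions, not verified on every Drill/SDK combination) is to force the bundled AWS SDK into Signature Version 4 via a JVM system property and keep a region-specific endpoint in core-site.xml. The me-south-1 endpoint below is an assumption for the MEA region, and conf/drill-env.sh is the assumed place to set the JVM option:
# conf/drill-env.sh -- force SigV4 in the AWS SDK used by the s3a connector
export DRILL_JAVA_OPTS="$DRILL_JAVA_OPTS -Dcom.amazonaws.services.s3.enableV4=true"
<!-- core-site.xml -->
<property>
  <name>fs.s3a.endpoint</name>
  <value>s3.me-south-1.amazonaws.com</value>
</property>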

Azure SQL password does not meet complexity requirements when coming from Key Vault but does when coming from a variable

I'm trying to create an Azure SQL server with a JSON ARM template.
In my JSON, when I put the password into a variable, the deployment is OK.
When I get the same password from a key vault, it doesn't meet the complexity policy.
My template is valid, but this error message appears when creating the SQL resource:
Password validation failed. The password does not meet policy requirements because it is not complex enough.
The password I use is:
P#ssw0rd01isCompleX
I think I have configured the JSON properly, but it doesn't work.
I have tried different passwords.
Since I'm working with Visual Studio, I have also removed the call to the key vault from the JSON parameters file and let Visual Studio add it for me... same result.
The key vault is set to enable access to Azure Resource Manager for template deployment.
The deployment output shows a blank value for the password; maybe that's normal, maybe it's a symptom...
17:51:46 - Name Type Value
17:51:46 - ===============
17:51:46 - environmentName String dev
17:51:46 - adminlogin String adminlogin
17:51:46 - apv-eun-dev-sql SecureString
17:51:46 - utcValue String 2019-05-16 T15:51:40 +00:00
Do you have an idea about the cause of this?
json file:
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"EnvironmentName": {
"type": "string",
"allowedValues": [
"prod",
"pprd",
"uat",
"dev"
]
},
"adminlogin": {
"type": "string"
},
"apv-eun-dev-sql": {
"type": "securestring"
},
"utcValue": {
"type": "string",
"defaultValue": "[utcNow('yyyy-MM-dd THH:mm:ss zzzz')]"
}
},
"variables": {
},
"resources": [
{
"apiVersion": "2015-05-01-preview",
"type": "Microsoft.Sql/servers",
"location": "[resourceGroup().location]",
"name": "[concat('apv-eun-', parameters('EnvironmentName'),'-sql-001')]",
"properties": {
"administratorLogin": "parameters('adminlogin')",
"administratorLoginPassword": "parameters('apv-eun-dev-sql')",
"version": "12.0"
},
"tags": { "ONEData": "Rules" }
}
],
"outputs": {}
}
json parameters file:
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"EnvironmentName": {
"value": "dev"
},
"adminlogin": {
"value": "adminlogin"
},
"apv-eun-dev-sql": {
"reference": {
"keyVault": {
"id": "/subscriptions/XXX/resourceGroups/XXX/providers/Microsoft.KeyVault/vaults/apv-eun-dev-akv-001"
},
"secretName": "apv-eun-dev-sql"
}
}
}
}
I'm not sure, but it seems to be a syntax problem.
In your JSON file, you have:
"administratorLogin": "parameters('adminlogin')",
"administratorLoginPassword": "parameters('apv-eun-dev-sql')"
While it should be:
"administratorLogin": "[parameters('adminlogin')]",
"administratorLoginPassword": "[parameters('apv-eun-dev-sql')]"
Sources:
https://github.com/rjmax/ArmExamples/blob/master/keyvaultexamples/KeyVaultUse.parameters.json
https://github.com/rjmax/ArmExamples/blob/master/keyvaultexamples/KeyVaultUse.json
https://learn.microsoft.com/fr-fr/azure/azure-resource-manager/resource-manager-keyvault-parameter
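For completeness, with that change the properties block of the SQL server resource in your template becomes:
"properties": {
  "administratorLogin": "[parameters('adminlogin')]",
  "administratorLoginPassword": "[parameters('apv-eun-dev-sql')]",
  "version": "12.0"
}
Without the square brackets, ARM treats the value as a literal string instead of evaluating the expression, so the literal text parameters('apv-eun-dev-sql') was being used as the password, which is why the complexity check fails even though the secret in the key vault is complex enough.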

Unable to access S3 from EC2 instance in CloudFormation -- A client error (301) occurred when calling the HeadObject operation: Moved Permanently

I'm trying to download a file from an S3 bucket to an instance through the userdata property of the instance. However, I get the error:
A client error (301) occurred when calling the HeadObject operation:
Moved Permanently.
I use an IAM role, a managed policy, and an instance profile to give the instance access to the S3 bucket:
"Role": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"ec2.amazonaws.com",
"s3.amazonaws.com"
]
},
"Action": [
"sts:AssumeRole"
]
}
]
},
"Path": "/",
"ManagedPolicyArns": [
{
"Ref": "ManagedPolicy"
}
]
},
"Metadata": {
"AWS::CloudFormation::Designer": {
"id": "069d4411-2718-400f-98dd-529bb95fd531"
}
}
},
"RolePolicy": {
"Type": "AWS::IAM::Policy",
"Properties": {
"PolicyName": "S3Download",
"PolicyDocument": {
"Statement": [
{
"Action": [
"s3:*"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::mybucket/*"
}
]
},
"Roles": [
{
"Ref": "Role"
}
]
},
"Metadata": {
"AWS::CloudFormation::Designer": {
"id": "babd8869-948c-4b8a-958d-b1bff9d3063b"
}
}
},
"InstanceProfile": {
"Type": "AWS::IAM::InstanceProfile",
"Properties": {
"Path": "/",
"Roles": [
{
"Ref": "Role"
}
]
},
"Metadata": {
"AWS::CloudFormation::Designer": {
"id": "890c4df0-5d25-4f2c-b81e-05a8b8ab37c4"
}
}
},
And I attempt to download the file using this line in the userdata property:
aws s3 cp s3://mybucket/login.keytab destination_directory/
Any thoughts as to what is going wrong? I can download the file successfully if I make it public then use wget from the command line, but for some reason the bucket/file can't be found when using cp and the file isn't publicly accessible.
Moved Permanently normally indicates that you are being redirected to the location of the object, typically because the request is being sent to an endpoint in a different region.
Add a --region parameter where the region matches the bucket's region. For example:
aws s3 cp s3://mybucket/login.keytab destination_directory/ --region ap-southeast-2
Alternatively, you can add the region to the AWS CLI configuration on the instance, for example a region = ap-southeast-2 line in /root/.aws/config (or in the credentials file).
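For example, a minimal sketch of that file, assuming the bucket lives in ap-southeast-2 and the userdata script runs as root:
# /root/.aws/config
[default]
region = ap-southeast-2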

Take Scheduled EBS Snapshots using CloudWatch and CloudFormation

The task seems to be simple: I want to take scheduled EBS snapshots of my EBS volumes on a daily basis. According to the documentation, CloudWatch seems to be the right place to do that:
http://docs.aws.amazon.com/AmazonCloudWatch/latest/events/TakeScheduledSnapshot.html
Now I want to create such a scheduled rule when launching a new stack with CloudFormation. For this, there is a new resource type AWS::Events::Rule:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-events-rule.html
But now comes the tricky part: how can I use this resource type to create a built-in target that creates my EBS snapshot, as described in the scenario above?
I'm pretty sure there is a way to do it, but I can't figure it out right now. My resource template looks like this at the moment:
"DailyEbsSnapshotRule": {
"Type": "AWS::Events::Rule",
"Properties": {
"Description": "creates a daily snapshot of EBS volume (8 a.m.)",
"ScheduleExpression": "cron(0 8 * * ? *)",
"State": "ENABLED",
"Targets": [{
"Arn": { "Fn::Join": [ "", "arn:aws:ec2:", { "Ref": "AWS::Region" }, ":", { "Ref": "AWS::AccountId" }, ":volume/", { "Ref": "EbsVolume" } ] },
"Id": "SomeId1"
}]
}
}
Any ideas?
I found the solution to this on a question about how to do this in Terraform. The solution in plain CloudFormation JSON seems to be:
"EBSVolume": {
"Type": "AWS::EC2::Volume",
"Properties": {
"Size": 1
}
},
"EBSSnapshotRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Version" : "2012-10-17",
"Statement": [ {
"Effect": "Allow",
"Principal": {
"Service": ["events.amazonaws.com", "ec2.amazonaws.com"]
},
"Action": ["sts:AssumeRole"]
}]
},
"Path": "/",
"Policies": [{
"PolicyName": "root",
"PolicyDocument": {
"Version" : "2012-10-17",
"Statement": [ {
"Effect": "Allow",
"Action": [
"ec2:CreateSnapshot"
],
"Resource": "*"
} ]
}
}]
}
},
"EBSSnapshotRule": {
"Type": "AWS::Events::Rule",
"Properties": {
"Description": "creates a daily snapshot of EBS volume (1 a.m.)",
"ScheduleExpression": "cron(0 1 * * ? *)",
"State": "ENABLED",
"Name": {"Ref": "AWS::StackName"},
"RoleArn": {"Fn::GetAtt" : ["EBSSnapshotRole", "Arn"]},
"Targets": [{
"Arn": {
"Fn::Join": [
"",
[
"arn:aws:automation:",
{"Ref": "AWS::Region"},
":",
{"Ref": "AWS::AccountId"},
":action/",
"EBSCreateSnapshot/EBSCreateSnapshot_",
{"Ref": "AWS::StackName"}
]
]
},
"Input": {
"Fn::Join": [
"",
[
"\"arn:aws:ec2:",
{"Ref": "AWS::Region"},
":",
{"Ref": "AWS::AccountId"},
":volume/",
{"Ref": "EBSVolume"},
"\""
]
]
},
"Id": "EBSVolume"
}]
}
}
Unfortunately, it is not (yet) possible to set up scheduled EBS snapshots via CloudWatch Events within a CloudFormation stack.
It is a bit hidden in the docs: http://docs.aws.amazon.com/AmazonCloudWatchEvents/latest/APIReference/API_PutTargets.html
Note that creating rules with built-in targets is supported only in the AWS Management Console.
And "EBSCreateSnapshot" is one of these so-called "built-in targets".
Amazon seems to have removed their "built-in" targets, and it is now possible to create CloudWatch rules to schedule EBS snapshots.
First you must create a rule, to which the targets will be attached.
Replace XXXXXXXXXXXXX with your AWS account ID.
aws events put-rule \
--name create-disk-snapshot-for-ec2-instance \
--schedule-expression 'rate(1 day)' \
--description "Create EBS snapshot" \
--role-arn arn:aws:iam::XXXXXXXXXXXXX:role/AWS_Events_Actions_Execution
Then you simply add your targets (up to 10 targets allowed per rule).
aws events put-targets \
--rule create-disk-snapshot-for-ec2-instance \
--targets "[{ \
\"Arn\": \"arn:aws:automation:eu-central-1:XXXXXXXXXXXXX:action/EBSCreateSnapshot/EBSCreateSnapshot_mgmt-disk-snapshots\", \
\"Id\": \"xxxx-yyyyy-zzzzz-rrrrr-tttttt\", \
\"Input\": \"\\\"arn:aws:ec2:eu-central-1:XXXXXXXXXXXXX:volume/<VolumeId>\\\"\" \}]"
There's a better way to automate EBS snapshots these days, using Data Lifecycle Manager (DLM). It's also available through CloudFormation. See these for details:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/snapshot-lifecycle.html
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dlm-lifecyclepolicy.html
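For completeness, a minimal sketch of such a policy in CloudFormation JSON might look like the following; the execution role name, target tag, and schedule values are assumptions you would adapt, and DLM snapshots every volume carrying the target tag:
"DailySnapshotPolicy": {
  "Type": "AWS::DLM::LifecyclePolicy",
  "Properties": {
    "Description": "Daily EBS snapshots",
    "State": "ENABLED",
    "ExecutionRoleArn": "arn:aws:iam::XXXXXXXXXXXXX:role/AWSDataLifecycleManagerDefaultRole",
    "PolicyDetails": {
      "ResourceTypes": ["VOLUME"],
      "TargetTags": [{ "Key": "Backup", "Value": "true" }],
      "Schedules": [{
        "Name": "DailySnapshots",
        "CreateRule": { "Interval": 24, "IntervalUnit": "HOURS", "Times": ["08:00"] },
        "RetainRule": { "Count": 7 }
      }]
    }
  }
}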