CloudFormation S3 bucket creation - amazon-s3

Here's the CloudFormation template I wrote to create a simple S3 bucket. How do I specify the name of the bucket? Is this the right way?
{
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Simple S3 Bucket",
    "Parameters": {
        "OwnerService": {
            "Type": "String",
            "Default": "CloudOps",
            "Description": "Owner or service name. Used to identify the owner of the vpc stack"
        },
        "ProductCode": {
            "Type": "String",
            "Default": "cloudops",
            "Description": "Lowercase version of the product code (i.e. jem). Used for tagging"
        },
        "StackEnvironment": {
            "Type": "String",
            "Default": "stage",
            "Description": "Lowercase version of the environment name (i.e. stage). Used for tagging"
        }
    },
    "Mappings": {
        "RegionMap": {
            "us-east-1": { "ShortRegion": "ue1" },
            "us-west-1": { "ShortRegion": "uw1" },
            "us-west-2": { "ShortRegion": "uw2" },
            "eu-west-1": { "ShortRegion": "ew1" },
            "ap-southeast-1": { "ShortRegion": "as1" },
            "ap-northeast-1": { "ShortRegion": "an1" },
            "ap-northeast-2": { "ShortRegion": "an2" }
        }
    },
    "Resources": {
        "JenkinsBuildBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {
                "BucketName": {
                    "Fn::Join": [
                        "-",
                        [
                            { "Ref": "ProductCode" },
                            { "Ref": "StackEnvironment" },
                            "deployment",
                            { "Fn::FindInMap": ["RegionMap", { "Ref": "AWS::Region" }, "ShortRegion"] }
                        ]
                    ]
                },
                "AccessControl": "Private"
            },
            "DeletionPolicy": "Delete"
        }
    },
    "Outputs": {
        "DeploymentBucket": {
            "Description": "Bucket Containing Chef files",
            "Value": { "Ref": "JenkinsBuildBucket" }
        }
    }
}

Here's a really simple CloudFormation template that creates an S3 bucket, including defining the bucket name.
AWSTemplateFormatVersion: '2010-09-09'
Description: create a single S3 bucket
Resources:
  SampleBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: sample-bucket-0827-cc
You can also leave the "Properties: BucketName" lines off if you want AWS to name the bucket for you. Then it will look like $StackName-SampleBucket-$uniqueIdentifier.
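For example, this variant (a sketch of the same template with the name omitted) lets CloudFormation generate the bucket name:
AWSTemplateFormatVersion: '2010-09-09'
Description: create a single S3 bucket, letting CloudFormation name it
Resources:
  SampleBucket:
    Type: AWS::S3::Bucket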
Hope this helps.

Your code has the BucketName already specified:
"BucketName": {
"Fn::Join": [
"-",
[
{
"Ref": "ProductCode"
},
{
"Ref": "StackEnvironment"
},
"deployment",
{
"Fn::FindInMap": [
"RegionMap",
{
"Ref": "AWS::Region"
},
"ShortRegion"
]
}
]
]
},
The BucketName is a string, and since you are using Fn::Join, it will be the joined result of the values you pass it.
"The intrinsic function Fn::Join appends a set of values into a single value, separated by the specified delimiter. If a delimiter is the empty string, the set of values are concatenated with no delimiter."
Your bucket name, if you don't change the defaults, is:
cloudops-stage-deployment-yourAwsRegion
If you change the default parameters, both cloudops and stage can be changed; deployment is hard-coded. yourAwsRegion is pulled from the region where the stack is running and is returned in short format via the mapping (for example, ue1 in us-east-1).
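As an aside, the same name can be built a bit more compactly with Fn::Sub plus a substitution map; a minimal sketch, equivalent to the Fn::Join above:
"BucketName": {
    "Fn::Sub": [
        "${ProductCode}-${StackEnvironment}-deployment-${ShortRegion}",
        {
            "ShortRegion": { "Fn::FindInMap": ["RegionMap", { "Ref": "AWS::Region" }, "ShortRegion"] }
        }
    ]
}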

To extend cloudquiz's answer, this is what it'd look like in YAML format:
Resources:
  SomeS3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName:
        Fn::Join: ["-", ["yourbucketname", {'Fn::Sub': '${AWS::Region}'}, {'Fn::Sub': '${Stage}'}]]

Related

How is the S3 bucket name derived in CloudFormation?

I have this CloudFormation script, template.js, that creates a bucket. I'm a bit unsure how the bucket name is being assembled.
Assuming my stack name is my-service, I'm getting a bucket named my-service-s3bucket-1p3s4szy5bomf.
I want to know how this name was derived.
I also want to get rid of that ID at the end (-1p3s4szy5bomf).
Can I skip Outputs at the end? Not sure what they do.
Code in template.js
var stackTemplate = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "with S3",
    "Resources": {
        "S3Bucket": {
            "Type": "AWS::S3::Bucket",
            "DeletionPolicy": "Retain",
            "Properties": {},
            "Metadata": {
                "AWS::CloudFormation::Designer": {
                    "id": "bba483af-4ae6-4d3d-b37d-435f66c42e44"
                }
            }
        },
        "S3BucketAccessPolicy": {
            "Type": "AWS::IAM::Policy",
            "Properties": {
                "PolicyName": "S3BucketAccessPolicy",
                "Roles": [
                    { "Ref": "IAMServiceRole" }
                ],
                "PolicyDocument": {
                    "Version": "2012-10-17",
                    "Statement": [
                        {
                            "Effect": "Allow",
                            "Action": [
                                "s3:DeleteObject",
                                "s3:GetObject",
                                "s3:PutObject",
                                "s3:PutObjectAcl",
                                "s3:List*"
                            ],
                            "Resource": [
                                {
                                    "Fn::Sub": [
                                        "${S3BucketArn}",
                                        { "S3BucketArn": { "Fn::GetAtt": ["S3Bucket", "Arn"] } }
                                    ]
                                },
                                {
                                    "Fn::Sub": [
                                        "${S3BucketArn}/*",
                                        { "S3BucketArn": { "Fn::GetAtt": ["S3Bucket", "Arn"] } }
                                    ]
                                }
                            ]
                        }
                    ]
                }
            }
        }
    },
    "Outputs": {
        "s3Bucket": {
            "Description": "The created S3 bucket.",
            "Value": { "Ref": "S3Bucket" },
            "Export": {
                "Name": { "Fn::Sub": "${AWS::StackName}-S3Bucket" }
            }
        },
        "s3BucketArn": {
            "Description": "The ARN of the created S3 bucket.",
            "Value": { "Fn::GetAtt": ["S3Bucket", "Arn"] },
            "Export": {
                "Name": { "Fn::Sub": "${AWS::StackName}-S3BucketArn" }
            }
        }
    }
};
stackUtils.assembleStackTemplate(stackTemplate, module);
I want to know how this name was derived
If you don't specify a name for your bucket, CloudFormation generates a new one based on the pattern $name-of-stack-s3bucket-$generatedId (the middle part is your resource's logical ID, S3Bucket, lowercased).
From the documentation (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-s3-bucket.html):
BucketName
A name for the bucket. If you don't specify a name, AWS CloudFormation generates a unique ID and uses that ID for the bucket name.
I also want to get rid of that ID at the end (-1p3s4szy5bomf)
You can assign a name to your bucket, but AWS recommends leaving it empty so CloudFormation generates a unique one, to avoid collisions when the same template creates buckets with the same name (stack sets, for instance). Example:
"Resources": {
"S3Bucket": {
"Type": "AWS::S3::Bucket",
"DeletionPolicy": "Retain",
"Properties": {
"BucketName": "DesiredNameOfBucket" <==
},
"Metadata": {
"AWS::CloudFormation::Designer": {
"id": "bba483af-4ae6-4d3d-b37d-435f66c42e44"
}
}
},
Can I skip Outputs at the end? Not sure what they do
The Outputs expose the name and the ARN of the created bucket so that other stacks and tooling can look them up. If you don't need that, you can delete the Outputs section from your template.
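If you do keep them, the Export names let other stacks consume these values with Fn::ImportValue. A minimal sketch, assuming the stack above is named my-service, of how another template's IAM policy could reference the bucket ARN:
"Resource": { "Fn::ImportValue": "my-service-S3BucketArn" }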

REST dataset for Copy Activity source gives me error Invalid PaginationRule

My Copy Activity is set up to use a REST GET API call as my source. I keep getting Error Code 2200, Invalid PaginationRule, RuleKey=supportRFC5988.
I can call the GET Rest URL using the Web Activity, but this isn't optimal as I then have to pass the output to a stored procedure to load the data to the table. I would much rather use the Copy Activity.
Any ideas why I would get an Invalid PaginationRule error on a call?
I'm using a REST Linked Service with the following properties:
Name: Workday
Connect via integration runtime: link-unknown-self-hosted-ir
Base URL: https://wd2-impl-services1.workday.com/ccx/service
Authentication type: Basic
User name: Not telling
Azure Key Vault for password
Server Certificate Validation is enabled
Parameters: Name:format Type:String Default value:json
Datasource:
"name": "Workday_Test_REST_Report",
"properties": {
"linkedServiceName": {
"referenceName": "Workday",
"type": "LinkedServiceReference",
"parameters": {
"format": "json"
}
},
"folder": {
"name": "Workday"
},
"annotations": [],
"type": "RestResource",
"typeProperties": {
"relativeUrl": "/customreport2/company1/person%40company.com/HIDDEN_BI_RaaS_Test_Outbound"
},
"schema": []
}
}
Copy Activity
{
    "name": "Copy Test Workday REST API output to a table",
    "properties": {
        "activities": [
            {
                "name": "Copy data1",
                "type": "Copy",
                "dependsOn": [],
                "policy": {
                    "timeout": "7.00:00:00",
                    "retry": 0,
                    "retryIntervalInSeconds": 30,
                    "secureOutput": false,
                    "secureInput": false
                },
                "userProperties": [],
                "typeProperties": {
                    "source": {
                        "type": "RestSource",
                        "httpRequestTimeout": "00:01:40",
                        "requestInterval": "00.00:00:00.010",
                        "requestMethod": "GET",
                        "paginationRules": {
                            "supportRFC5988": "true"
                        }
                    },
                    "sink": {
                        "type": "SqlMISink",
                        "tableOption": "autoCreate"
                    },
                    "enableStaging": false
                },
                "inputs": [
                    {
                        "referenceName": "Workday_Test_REST_Report",
                        "type": "DatasetReference"
                    }
                ],
                "outputs": [
                    {
                        "referenceName": "Destination_db",
                        "type": "DatasetReference",
                        "parameters": {
                            "schema": "ELT",
                            "tableName": "WorkdayTestReportData"
                        }
                    }
                ]
            }
        ],
        "folder": {
            "name": "Workday"
        },
        "annotations": []
    }
}
Well, after posting this, I noticed that in the Copy Activity code there is a nugget: "supportRFC5988": "true". I switched the true to false, and everything just worked for me. I don't see a way to change this in the Copy Activity GUI; editing the source code and setting this option to false did the trick.
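For reference, this is the relevant fragment of the Copy Activity source after the change:
"paginationRules": {
    "supportRFC5988": "false"
}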

JSON schema validation for array that can have items with different keys

I am trying to find a schema that will validate a given array with multiple items. Each item can use one of 2 possible keys, but all the items should use the same key.
If the 2 possible keys are 'primary' and 'secondary', then all the items should have 'primary' or all the items should have 'secondary'. oneOf does not seem to be working in this case.
Is there a solution to this? Any help is appreciated. Thank you.
Schema:
{
  type: "object",
  properties: {
    values: {
      type: "array",
      uniqueItems: true,
      minItems: 1,
      maxItems: 100,
      items: {
        anyOf: [
          { $ref: "#/definitions/primaryObj" },
          { $ref: "#/definitions/secondaryObj" }
        ]
      }
    },
  },
  definitions: {
    primaryObj: {
      type: "object",
      required: ["id", "primary"],
      properties: {
        id: {
          type: "string",
          description: "The id",
        },
        primary: {
          type: "string",
          description: "primary value",
        },
      },
    },
    secondaryObj: {
      type: "object",
      required: ["id", "secondary"],
      properties: {
        id: {
          type: "string",
          description: "The id",
        },
        secondary: {
          type: "string",
          description: "secondary value",
        },
      },
    },
  },
  required: ["values"],
}
Sample Input -
Input 1 - should PASS validation
{
    "values": [
        { "id": "1", "primary": "hello" },
        { "id": "2", "primary": "world" }
    ]
}
Input 2 - should PASS validation
{
    "values": [
        { "id": "1", "secondary": "hello" },
        { "id": "2", "secondary": "world" }
    ]
}
Input 3 - should FAIL validation
{
    "values": [
        { "id": "1", "primary": "hello" },
        { "id": "2", "secondary": "world" }
    ]
}
You were pretty close here. There are two changes you need to make in order to get the validation you want. (I'm going to assume you're using draft-07, although this applies to newer drafts also.)
First, let's take the top section.
The anyOf keyword is specified as follows:
An instance validates successfully against this keyword if it validates successfully against at least one schema defined by this keyword's value.
https://datatracker.ietf.org/doc/html/draft-handrews-json-schema-validation-01#section-6.7.2
You only want ONE of the referenced subschemas to be true!
oneOf is defined similarly:
An instance validates successfully against this keyword if it validates successfully against exactly one schema defined by this keyword's value.
https://datatracker.ietf.org/doc/html/draft-handrews-json-schema-validation-01#section-6.7.3
So we change your schema to check that only ONE of the references is valid...
"items": {
"oneOf": [
{
"$ref": "#/definitions/primaryObj"
},
{
"$ref": "#/definitions/secondaryObj"
}
]
}
But this is still incorrect. Let's refresh what items does.
This keyword determines how child instances validate for arrays, and does not directly validate the immediate instance itself.
If "items" is a schema, validation succeeds if all elements in the array successfully validate against that schema.
https://datatracker.ietf.org/doc/html/draft-handrews-json-schema-validation-01#section-6.4.1
It LOOKS like we got this right; however, the first paragraph in the quote above conveys that items applies its subschema to each item in the array, not to the array as a whole.
What our subschema above is doing is checking each item in the array by itself, in isolation from the other items, to confirm it is "primary" or "secondary" as you defined them.
What we WANT to do is check that ALL items in the array are either "primary" or "secondary". To achieve this, we need to move the oneOf outside items.
"oneOf": [
{
"items": {
"$ref": "#/definitions/primaryObj"
}
},
{
"items": {
"$ref": "#/definitions/secondaryObj"
}
}
]
Almost there! This almost works, but we still find that mixing primary and secondary doesn't cause validation to fail.
Let's check our assumptions. We assume that validation should fail when the instance data has primary and secondary in objects in the array. We can test this by changing one of the subschemas in our oneOf to false, forcing the first subschema definition (primary) to be checked. It should check that all items in the array are primary, and any secondary should cause validation failure.
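For example, replacing the second subschema with false (a boolean schema that always fails) leaves only the primary branch in play:
"oneOf": [
    {
        "items": { "$ref": "#/definitions/primaryObj" }
    },
    false
]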
We have to remember, JSON Schema is constraints based. Anything that isn't constrained, is allowed.
If we look at the definition for primaryObj, it requires and defines the validation for id and primary, but this doesn't inherently prevent additional keys in the object. To do that, we need to add "additionalProperties": false (to both definitions).
The end result looks like this. You can check out the live demo at https://jsonschema.dev/s/3ZKBp
{
    "$schema": "http://json-schema.org/draft-07/schema",
    "type": "object",
    "properties": {
        "values": {
            "type": "array",
            "uniqueItems": true,
            "minItems": 1,
            "maxItems": 100,
            "oneOf": [
                {
                    "items": { "$ref": "#/definitions/primaryObj" }
                },
                {
                    "items": { "$ref": "#/definitions/secondaryObj" }
                }
            ]
        }
    },
    "definitions": {
        "primaryObj": {
            "type": "object",
            "required": [
                "id",
                "primary"
            ],
            "properties": {
                "id": {
                    "type": "string",
                    "description": "The id"
                },
                "primary": {
                    "type": "string",
                    "description": "primary value"
                }
            },
            "additionalProperties": false
        },
        "secondaryObj": {
            "type": "object",
            "required": [
                "id",
                "secondary"
            ],
            "properties": {
                "id": {
                    "type": "string",
                    "description": "The id"
                },
                "secondary": {
                    "type": "string",
                    "description": "secondary value"
                }
            },
            "additionalProperties": false
        }
    },
    "required": [
        "values"
    ]
}

jfrog artifactory "invalid character '"' after object key:value pair"

So I have this spec file in artifactory to remove folders (with artifacts inside) older than 3 months in more than one repository (3 in this example).
{
    "files": [{
        "aql": {
            "items.find": {
                "$or": [{
                    "$and": [{
                        "repo": "repo1",
                        "path": "com/domain/repo1",
                        "created": {
                            "$before": "3mo"
                        }
                        "type": "folder",
                        "name": {"$match": "20*"}
                    }],
                    "$and": [{
                        "repo": "repo2",
                        "path": "com/domain/repo2",
                        "created": {
                            "$before": "3mo"
                        }
                        "type": "folder",
                        "name": {"$match": "20*"}
                    }],
                    "$and": [{
                        "repo": "repo3",
                        "path": "com/domain/repo3",
                        "created": {
                            "$before": "3mo"
                        }
                        "type": "folder",
                        "name": {"$match": "20*"}
                    }]
                }]
            }
        }
    }]
}
But I'm getting: [Error] invalid character '"' after object key:value pair
How can I tell which (") is causing the error? The output is not very descriptive, unlike some other languages that at least tell you the line number.
On the other hand, if I use the following spec for a single repository, it works like a charm.
thank you!
{
    "files": [{
        "aql": {
            "items.find": {
                "repo": "repo5",
                "path": "com/domain/repo5",
                "created": {
                    "$before": "3mo"
                },
                "type": "folder",
                "name": {"$match": "20*"}
            }
        }
    }]
}
You are missing a comma after all of the "created" key/value pairs:
"created": {
"$before": "3mo"
} <-- missing a comma here
"type": "folder",
Please notice that your working example has the comma in the right place.
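For clarity, here is the first repository block from the failing spec with the comma in place:
"$and": [{
    "repo": "repo1",
    "path": "com/domain/repo1",
    "created": {
        "$before": "3mo"
    },
    "type": "folder",
    "name": {"$match": "20*"}
}]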

Preventing dependent property validation when the parent property does not exist

I am new to JSON schemas. I have a property (property1) that is dependent on another property (property2), which in turn is dependent on a third property (property3). I am trying to figure out how to prevent the schema from validating property1 if property2 doesn't exist. I am using the Python jsonschema module for validating.
I have a simple schema with three properties: species, otherDescription, and otherDescriptionDetail. The rules I'm trying to enforce are:
1) if species = "Human", otherDescription is required.
2) if species = "Human" and otherDescription != "None", otherDescriptionDetail is required.
3) if species != "Human", neither of the other two fields is required.
My test JSON correctly fails validation if species is "Human" and otherDescription doesn't exist, but it also reports that otherDescriptionDetail is a required property even though at this point it shouldn't be because there is no otherDescription value to compare it against. Is it possible to implement this logic with a JSON schema?
This is my schema:
"$schema": "http://json-schema.org/draft-07/schema#",
"$id":"http://example.com/test_schema.json",
"title": "annotations",
"description": "Validates file annotations",
"type": "object",
"properties": {
"species": {
"description": "Type of species",
"anyOf": [
{
"const": "Human",
"description": "Homo sapiens"
},
{
"const": "Neanderthal",
"description": "Cave man"
}
]
},
"otherDescription": {
"type": "string"
},
"otherDescriptionDetail": {
"type": "string"
}
},
"required": [
"species"
],
"allOf": [
{
"if": {
"properties": {
"species": {
"const": "Human"
}
}
},
"then": {
"required": ["otherDescription"]
}
},
{
"if": {
"allOf": [
{
"properties": {
"species": {
"const": "Human"
},
"otherDescription": {
"not": {"const": "None"}
}
}
}
]
},
"then": {
"required": ["otherDescriptionDetail"]
}
}
]
}
My test JSON is:
{
    "species": "Human"
}
The output that I want:
0: 'otherDescription' is a required property
The output that I am getting:
0: 'otherDescription' is a required property
1: 'otherDescriptionDetail' is a required property
Any help would be greatly appreciated.
You need to define otherDescription as a required property inside the if's allOf. Otherwise that if block will pass even when otherDescription is not present, and the then clause will demand otherDescriptionDetail.
"if": {
"allOf": [
{
"properties": {
"species": {
"const": "Human"
},
"otherDescription": {
"not": {"const": "None"}
}
},
"required": ["otherDescription"]
}
]
},
"then": {
"required": ["otherDescriptionDetail"]
}