Setting API Key required to true using serverless.template in AWS API Gateway - asp.net-core

I am deploying an ASP.NET Core project on AWS Lambda and I am struggling to make the API key required.
Here is my JSON template:
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Transform" : "AWS::Serverless-2016-10-31",
  "Description" : "An AWS Serverless Application that uses the ASP.NET Core framework running in Amazon Lambda.",
  "Parameters" : {
    "ShouldCreateBucket" : {
      "Type" : "String",
      "AllowedValues" : ["true", "false"],
      "Description" : "If true then the S3 bucket that will be proxied will be created with the CloudFormation stack."
    },
    "BucketName" : {
      "Type" : "String",
      "Description" : "Name of S3 bucket that will be proxied. If left blank a name will be generated.",
      "MinLength" : "0"
    }
  },
  "Conditions" : {
    "CreateS3Bucket" : {"Fn::Equals" : [{"Ref" : "ShouldCreateBucket"}, "true"]},
    "BucketNameGenerated" : {"Fn::Equals" : [{"Ref" : "BucketName"}, ""]}
  },
  "Resources" : {
    "AspNetCoreFunction" : {
      "Type" : "AWS::Serverless::Function",
      "Properties": {
        "Handler": "SmartClockAPI::SmartClockAPI.LambdaEntryPoint::FunctionHandlerAsync",
        "Runtime": "dotnetcore2.1",
        "CodeUri": "",
        "MemorySize": 256,
        "Timeout": 30,
        "Role": null,
        "Policies": [ "AWSLambdaFullAccess", "AmazonCognitoPowerUser", "AmazonAPIGatewayAdministrator" ],
        "Environment" : {
          "Variables" : {
            "AppS3Bucket" : { "Fn::If" : ["CreateS3Bucket", {"Ref":"Bucket"}, { "Ref" : "BucketName" } ] }
          }
        },
        "Events": {
          "PutResource": {
            "Type": "Api",
            "Properties": {
              "Path": "/{proxy+}",
              "Method": "ANY"
            }
          }
        }
      }
    },
    "BasicUsagePlan" : {
      "Type" : "AWS::ApiGateway::UsagePlan",
      "Properties" : {
        "UsagePlanName" : "Basic plan",
        "Quota" : {
          "Limit" : 100,
          "Period" : "MONTH"
        },
        "Throttle" : {
          "RateLimit" : 10,
          "BurstLimit" : 10
        }
      }
    },
    "Bucket" : {
      "Type" : "AWS::S3::Bucket",
      "Condition" : "CreateS3Bucket",
      "Properties" : {
        "BucketName" : { "Fn::If" : ["BucketNameGenerated", {"Ref" : "AWS::NoValue" }, { "Ref" : "BucketName" } ] }
      }
    }
  },
  "Outputs" : {
    "ApiURL" : {
      "Description" : "API endpoint URL for Prod environment",
      "Value" : { "Fn::Sub" : "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/" }
    },
    "S3ProxyBucket" : {
      "Value" : { "Fn::If" : ["CreateS3Bucket", {"Ref":"Bucket"}, { "Ref" : "BucketName" } ] }
    }
  }
}
What I am trying to achieve is setting this value to true from the JSON template.
I was expecting some extra property on the proxy event where I can specify this value.
Any ideas?

Use AWS::ApiGateway::UsagePlanKey
{
  "Type" : "AWS::ApiGateway::UsagePlanKey",
  "Properties" : {
    "KeyId" : String,
    "KeyType" : String,
    "UsagePlanId" : String
  }
}
Add a DependsOn to the ApiKey, ApiUsagePlan, and ApiUsagePlanKey resources to ensure they are created in the correct order; a working sketch of how this can fit the template above is shown below.
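This is illustrative only: it assumes the implicit ServerlessRestApi REST API and the ServerlessRestApiProdStage stage that the SAM transform generates for the proxy event, and the MyApiKey and BasicUsagePlanKey logical IDs are placeholders. With those assumptions, the Resources section could gain something like:
"MyApiKey" : {
  "Type" : "AWS::ApiGateway::ApiKey",
  "DependsOn" : [ "ServerlessRestApiProdStage" ],
  "Properties" : {
    "Name" : "BasicApiKey",
    "Enabled" : true,
    "StageKeys" : [ { "RestApiId" : { "Ref" : "ServerlessRestApi" }, "StageName" : "Prod" } ]
  }
},
"BasicUsagePlanKey" : {
  "Type" : "AWS::ApiGateway::UsagePlanKey",
  "DependsOn" : [ "MyApiKey", "BasicUsagePlan" ],
  "Properties" : {
    "KeyId" : { "Ref" : "MyApiKey" },
    "KeyType" : "API_KEY",
    "UsagePlanId" : { "Ref" : "BasicUsagePlan" }
  }
}
Keep in mind that the usage plan also has to be attached to the API stage (the ApiStages property of AWS::ApiGateway::UsagePlan) before the key is accepted, and requiring the key on individual methods is a separate API Gateway setting.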

Related

How to use springdoc with @PostMapping(consumes = MediaType.APPLICATION_FORM_URLENCODED_VALUE) and @RequestParam

I'm upgrading a project from SpringFox to SpringDoc v1.6.12 and I am struggling to make the new code work for the following method of my RestController:
@PostMapping(path = TASK_MAPPING_PATH, consumes = MediaType.APPLICATION_FORM_URLENCODED_VALUE)
public ResponseEntity<String> loadTask(
    @RequestParam String applicationId,
    @RequestParam String businessId,
    @RequestParam boolean directLink
) {[...]}
The particularity of this method is that its parameters should be encoded in the body, since the Content-Type application/x-www-form-urlencoded is used.
But when I browse the URL https://localhost:8443/v3/api-docs, the generated definition is the following:
"/api/enrolment/task" : {
"post" : {
"operationId" : "loadTask",
"parameters" : [ {
"in" : "query",
"name" : "applicationId",
"required" : true,
"schema" : {
"type" : "string"
}
}, {
"in" : "query",
"name" : "businessId",
"required" : true,
"schema" : {
"type" : "string"
}
}, {
"in" : "query",
"name" : "directLink",
"required" : true,
"schema" : {
"type" : "boolean"
}
} ],
"responses" : {
[...]
},
"summary" : [...],
"tags" : [...]
}
},
All of the applicationId, businessId and directLink parameters are passed in the URL instead of in the request body, as I expected.
I would have expected the following openApi definition instead:
"/api/enrolment/task" : {
"post" : {
"operationId" : "loadTask",
"requestBody" : {
"content" : {
"application/x-www-form-urlencoded" : {
"schema" : {
"type" : "object",
"properties" : {
"applicationId" : {
"type" : "string"
},
"businessId" : {
"type" : "string"
},
"directLink" : {
"type" : "boolean"
}
},
"required" : [ "applicationId", "businessId", "directLink" ]
}
}
}
},
"responses" : {
[...]
},
"summary" : [...],
"tags" : [...]
}
},
Has anyone ever had the same issue?
Does anyone know the solution to my problem?
Thanks.

Mapping ElasticSearch apache module field

I am new to ES and I am facing a little problem I am struggling with.
I integrated the Metricbeat apache module with ES and it works fine.
The problem is that the Metricbeat apache module reports Apache's web traffic in KB (field apache.status.total_kbytes); instead, I would like to create my own field, named apache.status.total_mbytes.
I am trying to create a new mapping via the Dev Console using the following API commands:
PUT /metricbeat-7.2.0/_mapping
{
"settings":{
},
"mappings" : {
"apache.status.total_mbytes" : {
"full_name" : "apache.status.total_mbytes",
"mapping" : {
"total_mbytes" : {
"type" : "long"
}
}
}
}
}
Still ES returns the following error:
{
"error" : {
"root_cause" : [
{
"type" : "mapper_parsing_exception",
"reason" : "Root mapping definition has unsupported parameters: [settings : {}] [mappings : {apache.status.total_mbytes={mapping={total_mbytes={type=long}}, full_name=apache.status.total_mbytes}}]"
}
],
"type" : "mapper_parsing_exception",
"reason" : "Root mapping definition has unsupported parameters: [settings : {}] [mappings : {apache.status.total_mbytes={mapping={total_mbytes={type=long}}, full_name=apache.status.total_mbytes}}]"
},
"status" : 400
}
FYI, the following may shed some light:
GET /metricbeat-*/_mapping/field/apache.status.total_kbytes
Returns
{
"metricbeat-7.9.2-2020.10.06-000001" : {
"mappings" : {
"apache.status.total_kbytes" : {
"full_name" : "apache.status.total_kbytes",
"mapping" : {
"total_kbytes" : {
"type" : "long"
}
}
}
}
},
"metricbeat-7.2.0-2020.10.05-000001" : {
"mappings" : {
"apache.status.total_kbytes" : {
"full_name" : "apache.status.total_kbytes",
"mapping" : {
"total_kbytes" : {
"type" : "long"
}
}
}
}
}
}
What am I missing? Is the _mapping command wrong?
Thanks in advance,
A working example:
Create new index
PUT /metricbeat-7.2.0
{
"settings": {},
"mappings": {
"properties": {
"apache.status.total_kbytes": {
"type": "long"
}
}
}
}
Then GET metricbeat-7.2.0/_mapping/field/apache.status.total_kbytes will result in (same as your example):
{
"metricbeat-7.2.0" : {
"mappings" : {
"apache.status.total_kbytes" : {
"full_name" : "apache.status.total_kbytes",
"mapping" : {
"total_kbytes" : {
"type" : "long"
}
}
}
}
}
}
Now, if you want to add a new field to an existing mapping, use the API this way:
Update an existing index
PUT /metricbeat-7.2.0/_mapping
{
"properties": {
"total_mbytes": {
"type": "long"
}
}
}
Then GET metricbeat-7.2.0/_mapping will show you the updated mapping:
{
"metricbeat-7.2.0" : {
"mappings" : {
"properties" : {
"apache" : {
"properties" : {
"status" : {
"properties" : {
"total_kbytes" : {
"type" : "long"
}
}
}
}
},
"total_mbytes" : {
"type" : "long"
}
}
}
}
}
Also, take a look at the Put Mapping API.
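If the new field should live under the existing apache.status object rather than at the top level, the same Put Mapping call can be given the full object path. A sketch against the index from the question:
PUT /metricbeat-7.2.0/_mapping
{
  "properties": {
    "apache": {
      "properties": {
        "status": {
          "properties": {
            "total_mbytes": {
              "type": "long"
            }
          }
        }
      }
    }
  }
}
Note that this only defines the mapping; Metricbeat itself only ships total_kbytes, so something on the ingest side (an ingest pipeline, for example) would still have to compute and populate total_mbytes.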

mapper_parsing_exception in Elasticsearch(Reason: No type specified for field [X])

I wanted to provide an explicit mapping for the fields in my document, so I defined a mapping for my index demo, and it looks like this:
PUT /demo
{
"mappings": {
"properties": {
"X" : {
"X" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"Sub_X" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
}
}
}
}
}
After running the query, I am getting this error:
{
"error" : {
"root_cause" : [
{
"type" : "mapper_parsing_exception",
"reason" : "No type specified for field [X]"
}
],
"type" : "mapper_parsing_exception",
"reason" : "Failed to parse mapping [_doc]: No type specified for field [X]",
"caused_by" : {
"type" : "mapper_parsing_exception",
"reason" : "No type specified for field [X]"
}
},
"status" : 400
}
The field X in the JSON document looks like:
"X" : {
"X" : [
"a"
],
"Sub_X" : [
[
"b"
]
]
},
Please help me out with this Elasticsearch mapper_parsing_exception error.
What you have is called a nested data type.
You have X, which in turn contains X and Sub_X.
Mapping:
{
"properties": {
"X": {
"type": "nested"
}
}
}
Data:
{
"X": {
"X": [
"a"
],
"Sub_X": [
[
"b"
]
]
}
}
Query:
{
"query": {
"nested": {
"path": "X",
"query": {
"bool": {
"must": [
{ "match": { "X.X": "a" }},
{ "match": { "X.Sub_X": "b" }}
]
}
}
}
}
}
It outputs the document.
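Put together as Dev Console requests against the demo index from the question (document id 1 is arbitrary, and ?refresh just makes the document immediately searchable), the whole flow might look like this:
PUT /demo
{
  "mappings": {
    "properties": {
      "X": {
        "type": "nested"
      }
    }
  }
}

PUT /demo/_doc/1?refresh
{
  "X": {
    "X": [ "a" ],
    "Sub_X": [ [ "b" ] ]
  }
}

GET /demo/_search
{
  "query": {
    "nested": {
      "path": "X",
      "query": {
        "bool": {
          "must": [
            { "match": { "X.X": "a" } },
            { "match": { "X.Sub_X": "b" } }
          ]
        }
      }
    }
  }
}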

How to load Nested json data into a single column in druid

I am trying to load nested JSON data into Apache Druid:
Data-->
{
"a": "a_data",
"b": "b_data",
"c_blob_Column": {"aaaa"{"k":"sample"{"c":"sample2"}}}}
Spec -->
{ "type" : "kafka", "dataSchema" : { "dataSource" : "blob", "parser" : { "type" : "string", "parseSpec" : { "format" : "json", "dimensionsSpec" : { "dimensions" : [ "a", "b", "c_blob_Column"
]
},
"timestampSpec": {
"column": "timestamp",
"format": "iso"
}
}
},
"metricsSpec" : [],
"granularitySpec" : {
"type" : "uniform",
"segmentGranularity" : "DAY",
"queryGranularity" : "none",
"rollup" : false
}
},
"ioConfig" : {
"topic":"blob_topic",
"consumerProperties":{
"bootstrap.servers":"<local server>"
},
"appendToExisting" : false,
"useEarliestOffset": true,
"taskDuration": "PT15M"
},
"tuningConfig" : {
"type" : "kafka",
"maxRowsPerSegment" : 5000000,
"maxRowsInMemory" : 25000
}
}
Output columns-->
a,b,c_blob_Column,__time
I am able to load the data, but in the column c_blob_Column the data does not come through as JSON. Could someone please help me figure out how to load the JSON blob data?
You can use a jq expression in the flattenSpec:
"flattenSpec": {
"fields": [
{
"type": "jq",
"name": "c_blob_Column",
"expr": ".c_blob_Column | tojson"
}
]
}
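To show where this goes: in the spec from the question, the flattenSpec sits inside the parseSpec, alongside dimensionsSpec and timestampSpec, while the rest of the supervisor spec stays as posted. A sketch:
"parseSpec": {
  "format": "json",
  "flattenSpec": {
    "fields": [
      {
        "type": "jq",
        "name": "c_blob_Column",
        "expr": ".c_blob_Column | tojson"
      }
    ]
  },
  "dimensionsSpec": {
    "dimensions": [ "a", "b", "c_blob_Column" ]
  },
  "timestampSpec": {
    "column": "timestamp",
    "format": "iso"
  }
}
With tojson, the nested object is serialized and stored as a JSON string in the c_blob_Column dimension.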

How can I access protected S3 files in a CFN script?

I am trying to retrieve a file in my CloudFormation script. If I make the file publicly available, then it works fine. If the file is private, the cfn script fails with a 404 error in /var/log/. Trying to retrieve the file via wget results in the appropriate 403 error.
How can I retrieve private files from S3?
My file clause looks like:
"files" : {
"/etc/httpd/conf/httpd.conf" : {
"source" : "https://s3.amazonaws.com/myConfigBucket/httpd.conf"
}
},
I added an authentication clause and an appropriate parameter:
"Parameters" : {
"BucketRole" : {
"Description" : "S3 role for access to bucket",
"Type" : "String",
"Default" : "S3Access",
"ConstraintDescription" : "Must be a valid IAM Role"
}
}
"AWS::CloudFormation::Authentication": {
"default" : {
"type": "s3",
"buckets": [ "myConfigBucket" ],
"roleName": { "Ref" : "BucketRole" }
}
},
My IAM role's policy looks like:
{
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:Get*",
"s3:List*"
],
"Resource": "*"
}
]
}
The solution is to add an IamInstanceProfile property to the instance creation:
"Parameters" : {
...
"RoleName" : {
"Description" : "IAM Role for access to S3",
"Type" : "String",
"Default" : "DefaultRoleName",
"ConstraintDescription" : "Must be a valid IAM Role"
}
},
"Resources" : {
"InstanceName" : {
"Type" : "AWS::EC2::Instance",
"Properties" : {
"ImageId" : { "Fn::FindInMap" : [ "RegionMap", { "Ref" : "AWS::Region" }, "64"] },
"InstanceType" : { "Ref" : "InstanceType" },
"SecurityGroups" : [ {"Ref" : "SecurityGroup"} ],
"IamInstanceProfile" : { "Ref" : "RoleName" },
"KeyName" : { "Ref" : "KeyName" }
}
},
...
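If no instance profile with the role's name exists yet, one option (sketched here with illustrative logical IDs) is to declare the instance profile in the template as well and reference it from the instance:
"InstanceProfile" : {
  "Type" : "AWS::IAM::InstanceProfile",
  "Properties" : {
    "Path" : "/",
    "Roles" : [ { "Ref" : "RoleName" } ]
  }
},
"InstanceName" : {
  "Type" : "AWS::EC2::Instance",
  "Properties" : {
    "IamInstanceProfile" : { "Ref" : "InstanceProfile" },
    ...
  }
},
Ref on an AWS::IAM::InstanceProfile returns the instance profile name, which is what the IamInstanceProfile property expects.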