Extracting custom objects from HttpContext - asp.net-core

Context
I am rewriting an ASP.NET Core application from running on Lambda to running in an ECS container. Lambda supports the claims injected by the Cognito authorizer out of the box, but Kestrel doesn't.
API requests come in through API Gateway, where a Cognito User Pool authorizer validates the OAuth2 tokens and adds the claims from the token to the HTTP context.
Originally the app ran on Lambda, where the entry point inherited from Amazon.Lambda.AspNetCoreServer.APIGatewayProxyFunction, which extracts those claims and adds them to Request.HttpContext.User.Claims.
Kestrel of course doesn't support that, and the AWS ASP.NET Cognito Identity Provider seems to be meant for performing the same work the authorizer is already doing.
Solution?
So I got the idea that maybe I can add some custom code to extract the claims myself. The HTTP request injected into Lambda looks like this, so I expect it to be the same when it's proxied to ECS:
{
    "resource": "/{proxy+}",
    "path": "/api/authtest",
    "httpMethod": "GET",
    "headers": {
        <...>
    },
    "queryStringParameters": null,
    "pathParameters": {
        "proxy": "api/authtest"
    },
    "requestContext": {
        "resourceId": "8gffya",
        "authorizer": {
            "cognito:groups": "Admin",
            "phone_number_verified": "true",
            "cognito:username": "normj",
            "aud": "3mushfc8sgm8uoacvif5vhkt49",
            "event_id": "75760f58-f984-11e7-8d4a-2389efc50d68",
            "token_use": "id",
            "auth_time": "1515973296",
            "you_are_special": "true"
        }
        <...>
}
Is it possible to add all the key/value pairs from requestContext.authorizer to Request.HttpContext.User.Claims, and if so, how would I go about it?

I found a different solution for this.
Instead of trying to modify the HttpContext, I map the authorizer output to request headers in the API Gateway integration. The downside is that each claim needs to be hardcoded, since it doesn't seem possible to iterate over them.
Example Terraform:
resource "aws_api_gateway_integration" "integration" {
    rest_api_id             = "${var.aws_apigateway-id}"
    resource_id             = "${aws_api_gateway_resource.proxyresource.id}"
    http_method             = "${aws_api_gateway_method.method.http_method}"
    integration_http_method = "ANY"
    type                    = "HTTP_PROXY"
    uri                     = "http://${aws_lb.nlb.dns_name}/{proxy}"
    connection_type         = "VPC_LINK"
    connection_id           = "${aws_api_gateway_vpc_link.is_vpc_link.id}"
    request_parameters = {
        "integration.request.path.proxy"                     = "method.request.path.proxy"
        "integration.request.header.Authorizer-ResourceId"   = "context.authorizer.resourceId"
        "integration.request.header.Authorizer-ResourceName" = "context.authorizer.resourceName"
        "integration.request.header.Authorizer-Scopes"       = "context.authorizer.scopes"
        "integration.request.header.Authorizer-TokenType"    = "context.authorizer.tokenType"
    }
}

Related

AWS Cognito: How to get the user pool username from an identity ID of an identity pool?

We allow our users to upload data to an S3 bucket, which triggers a Python Lambda that in turn updates a DynamoDB entry based on the uploaded file. In the Lambda, we struggle to get the username of the user who put the item: since the Lambda is triggered by the put event from S3, we don't have the authorizer information available in the request context. The username is required because it needs to be part of the database record.
Here is some more background: every user should only have access to her own files, so we use this IAM policy (created with CDK):
new iam.PolicyStatement({
    effect: iam.Effect.ALLOW,
    actions: ['s3:PutObject', 's3:GetObject'],
    resources: [
        bucket.arnForObjects(
            'private/${cognito-identity.amazonaws.com:sub}/*'
        ),
    ],
})
Since the IAM policy validates the cognito-identity.amazonaws.com:sub field (which translates to the identity ID), we should be able to trust this value. This is the Python Lambda, along with an example of a record we receive:
import json
import boto3

'''
{
    "Records": [
        {
            "eventVersion": "2.1",
            "eventSource": "aws:s3",
            "awsRegion": "my-region-1",
            "eventTime": "2023-02-13T19:50:56.886Z",
            "eventName": "ObjectCreated:Put",
            "userIdentity": {
                "principalId": "AWS:XXXX:CognitoIdentityCredentials"
            },
            "requestParameters": {
                "sourceIPAddress": "XXX"
            },
            "responseElements": {
                "x-amz-request-id": "XXX",
                "x-amz-id-2": "XX"
            },
            "s3": {
                "s3SchemaVersion": "1.0",
                "configurationId": "XXX",
                "bucket": {
                    "name": "XXX",
                    "ownerIdentity": {
                        "principalId": "XXX"
                    },
                    "arn": "arn:aws:s3:::XXX"
                },
                "object": {
                    "key": "private/my-region-1%00000000-0000-0000-0000-000000000/my-file",
                    "size": 123,
                    "eTag": "XXX",
                    "sequencer": "XXX"
                }
            }
        }
    ]
}
'''

print('Loading function')

dynamodb = boto3.client('dynamodb')
cognito = boto3.client('cognito-idp')
cognito_id = boto3.client('cognito-identity')

print("Created clients")

def handler(event, context):
    # context.identity returns None
    print("Received event: " + json.dumps(event, indent=2))
    for record in event['Records']:
        time = record['eventTime']
        key = record['s3']['object']['key']
        identityId = key.split('/')[1]
        # How to get the username from the identityId?
    return "Done"
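One caveat about the handler above: object keys in S3 event notifications are URL-encoded, so the identity ID pulled out of the key should be decoded before use. A minimal sketch; the example key is made up, since real identity IDs look like '<region>:<guid>':

```python
from urllib.parse import unquote_plus

def identity_id_from_key(key: str) -> str:
    """Extract the Cognito identity ID from a key shaped like
    'private/<identityId>/<file>', decoding the URL-encoded form
    that S3 event notifications deliver."""
    return unquote_plus(key).split('/')[1]

print(identity_id_from_key("private/eu-west-1%3Aabc-123/my-file"))  # eu-west-1:abc-123
```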
Things we tried:
Finding an alternative to cognito-identity.amazonaws.com:sub that validates the username, but according to the documentation there is no option for that.
Encoding the username in the bucket key, but this opens a security hole, since a client could then pretend to have a different username.
Making a lookup with https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/cognito-identity.html to find the username for an identity ID, but so far we haven't found anything there.
Following How to get user attributes (username, email, etc.) using cognito identity id, but in the Lambda we don't have an ID or access token available.
Following Getting cognito user pool username from cognito identity pool identityId, but since the Lambda is triggered by an S3 put event, we don't have an authorizer context.
We could store the identity ID as a custom attribute of every Cognito user (as suggested in How to map Cognito (federated) identity ID to Cognito user pool ID?), but before doing that I would like to be sure there isn't a better option, as I fear the duplication of this information could lead to issues in the long run.

Attempting to subscribe to a Shopify Webhook w/AWS EventBridge produces error: "Address is an AWS ARN and includes api_client_id 'x' instead of 'y'"

I'm running this request through Postman. Some posts on the Shopify developer forum (e.g., this one) state, without clear explanation, that the request should be made from within the Shopify app that will subscribe to the webhooks, but Postman seems to work too.
In Postman . . .
Here's the endpoint:
https://{{shopifyDevelopmentStoreName}}.myshopify.com/admin/api/2022-07/graphql.json
Here's the GraphQL body:
mutation createWebhookSubscription($topic: WebhookSubscriptionTopic!, $webhookSubscription: EventBridgeWebhookSubscriptionInput!) {
    eventBridgeWebhookSubscriptionCreate(
        topic: $topic,
        webhookSubscription: $webhookSubscription
    ) {
        webhookSubscription {
            id
        }
        userErrors {
            message
        }
    }
}
Here's the payload being sent (notice the "client_id_x" value within the arn property):
{
    "topic": "PRODUCTS_CREATE",
    "webhookSubscription": {
        "arn": "arn:aws:events:us-east-1::event-source/aws.partner/shopify.com/client_id_x/LovecraftEventBridgeSource",
        "format": "JSON",
        "includeFields": "id"
    }
}
Here's the response I receive:
{
    "data": {
        "eventBridgeWebhookSubscriptionCreate": {
            "webhookSubscription": null,
            "userErrors": [
                {
                    "message": "Address is invalid"
                },
                {
                    "message": "Address is an AWS ARN and includes api_client_id 'client_id_x' instead of 'client_id_y'"
                }
            ]
        }
    },
    "extensions": {
        "cost": {
            "requestedQueryCost": 10,
            "actualQueryCost": 10,
            "throttleStatus": {
                "maximumAvailable": 1000.0,
                "currentlyAvailable": 990,
                "restoreRate": 50.0
            }
        }
    }
}
What's entirely unclear is why Shopify insists on the validity of 'client_id_y' when, in AWS, the value displayed is undeniably 'client_id_x'. Extremely confusing. I don't even see what difference using the Shopify app would make, except that it produces a client_id value that runs counter to one's expectations and intuitions.
Does anyone know why Shopify isn't just using the client_id value of the event bus created earlier in Amazon EventBridge?
The same happened to me, and I was lucky to find a solution.
The error message is just misleading.
I replaced the API access token for the Shopify REST API request (X-Shopify-Access-Token) with the one from the Shopify app holding the AWS credentials:
admin/settings/apps/development -> app -> API credentials -> Admin API access token (it can only be seen right after creation).
Then I could subscribe webhooks to the app via the REST interface.
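For reference, the same call can be sketched outside Postman as well. A hedged Python sketch; the store name and token below are placeholders, and the key point is that X-Shopify-Access-Token must be the Admin API access token of the app that owns the EventBridge event source:

```python
# Placeholders -- substitute your development store name and the Admin API
# access token from the app holding the AWS credentials.
STORE = "my-dev-store"
ACCESS_TOKEN = "shpat_placeholder"

# Same mutation as in the question above.
MUTATION = """
mutation createWebhookSubscription($topic: WebhookSubscriptionTopic!, $webhookSubscription: EventBridgeWebhookSubscriptionInput!) {
  eventBridgeWebhookSubscriptionCreate(topic: $topic, webhookSubscription: $webhookSubscription) {
    webhookSubscription { id }
    userErrors { message }
  }
}
"""

def build_request(topic: str, arn: str):
    """Assemble the URL, headers and JSON body for the subscription call."""
    url = f"https://{STORE}.myshopify.com/admin/api/2022-07/graphql.json"
    headers = {
        "Content-Type": "application/json",
        # Token of the app that created the EventBridge source, not another app's
        "X-Shopify-Access-Token": ACCESS_TOKEN,
    }
    body = {
        "query": MUTATION,
        "variables": {
            "topic": topic,
            "webhookSubscription": {"arn": arn, "format": "JSON"},
        },
    }
    return url, headers, body

# Send with any HTTP client, e.g.:
#   requests.post(url, headers=headers, json=body)
```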

BigCommerce StoreFront API SSO - Invalid login. Please attempt to log in again

Been at this for a few days. I am building a login form in my Angular/Node.js app. The BC API is able to verify the user/password. With that done, I need to allow the customer to enter the store via SSO, but the generated JWT is not working. My attempt is below... I am looking for troubleshooting tips.
Generate JWT / sso_url
var jwt = require('jwt-simple');

function decode_utf8(s) {
    return decodeURIComponent(escape(s));
}

function get_token(req, data) {
    let uid = req.id;
    let time = Math.round((new Date()).getTime() / 1000);
    let payload = {
        "iss": app.clientId,
        // "iat": Math.floor(new Date() / 1000),
        "iat": time,
        "jti": uid + "-" + time,
        "operation": "customer_login",
        "store_hash": app.storeHash,
        "customer_id": uid,
        "redirect_to": app.entry_url
    };
    let token = jwt.encode(payload, app.secret, 'HS512');
    token = decode_utf8(token);
    let sso_url = {sso_url: `${app.entry_url}/login/token/${token}`};
    return sso_url;
}
The payload resolves to:
{
    "iss": "hm6ntr11uikz****l3j2o662eurac9w",
    "iat": 1529512418,
    "jti": "1-1529512418",
    "operation": "customer_login",
    "store_hash": "2bihpr2wvz",
    "customer_id": "1",
    "redirect_to": "https://store-2bihpr2wvz.mybigcommerce.com"
}
Generated sso_url:
https://store-2bihpr2wvz.mybigcommerce.com/login/token/eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzUxMiJ9.eyJpc3MiOiJobTZudHIxMXVpa3oxMXpkbDNqMm82NjJldXJhYzl3IiwiaWF0IjoxNTI5NTEyNDE4LCJqdGkiOiIxLTE1Mjk1MTI0MTgiLCJvcGVyYXRpb24iOiJjdXN0b21lcl9sb2dpbiIsInN0b3JlX2hhc2giOiIyYmlocHIyd3Z6IiwiY3VzdG9tZXJfaWQiOiIxIiwicmVkaXJlY3RfdG8iOiJodHRwczovL3N0b3JlLTJiaWhwcjJ3dnoubXliaWdjb21tZXJjZS5jb20ifQ.vaeVTw4NjvX6AAPChgdXgMhm9b1W5B2QEwi4sJ6jz9KsKalqTqleijjRKs8jZP8jdQxC4ofYX5W0wYPMTquxQQ
Result: the "Invalid login. Please attempt to log in again" error.
About my environment:
I am using Node.js Express... my BC app's secret & clientId are used above, and they work for several other BC API tasks. My app is installed and authenticated in the BC admin. The app running the above is on localhost, but I also tried online over HTTPS (same result).
I suspect there might be some incorrect configuration in my store's admin, but I haven't found anything to change.
I decoded your JWT on jwt.io, and this is what I got:
Header:
{
    "typ": "JWT",
    "alg": "HS512"
}
There's at least one problem here:
BC requires HS256 as the algorithm, according to the docs:
https://developer.bigcommerce.com/api/v3/storefront.html#/introduction/customer-login-api
Body:
{
    "iss": "hm6ntr11uikz11zdl3j2o662eurac9w",
    "iat": 1529512418,
    "jti": "1-1529512418",
    "operation": "customer_login",
    "store_hash": "2bihpr2wvz",
    "customer_id": "1",
    "redirect_to": "https://store-2bihpr2wvz.mybigcommerce.com"
}
Problems here:
jti should be a totally random string; using something containing the time could result in duplicates, which will be rejected. Try using a UUID.
customer_id should be an int, not a string.
The redirect_to parameter accepts relative URLs only, so try "redirect_to": "/" if your goal is to redirect to the home page.
Another potential problem is system time: if your JWT was created in the "future" according to BC's server time, it also won't work. You can use the /v2/time endpoint response to specify the iat, or to keep your own clock in sync.
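Pulling those fixes together: the question's code is Node (jwt-simple), so this is just an illustrative sketch in Python of what the corrected payload looks like, with the HS256 signing step left as a comment:

```python
import time
import uuid

def login_payload(client_id: str, store_hash: str, customer_id) -> dict:
    """Customer Login API payload with the fixes above applied."""
    return {
        "iss": client_id,
        "iat": int(time.time()),
        "jti": str(uuid.uuid4()),         # random string, so no duplicates
        "operation": "customer_login",
        "store_hash": store_hash,
        "customer_id": int(customer_id),  # an int, not a string
        "redirect_to": "/",               # relative URLs only
    }

# Sign with HS256 (not HS512), e.g. with PyJWT:
#   token = jwt.encode(login_payload(cid, sh, 1), secret, algorithm="HS256")
```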

How to tune flasgger in order to use basic authentication in sending requests

I am trying to use flasgger for my simple RESTful API. The API requires authentication/authorization and uses basic authentication to perform any query.
There is really good documentation about Basic Authentication at swagger.io.
But how can those settings be implemented in flasgger? I've tried using a template to set securityDefinitions in flasgger, but the attempt hasn't been successful yet.
UPD: The issue probably hasn't been resolved upstream yet: Flasgger doesn't support basic auth #103
I've resolved the authentication issue by adding the following code:
swagger_template = {
    # Other settings
    'securityDefinitions': {
        'basicAuth': {
            'type': 'basic'
        }
    },
    # Other settings
}

app = Flask(__name__)
Swagger(app, swagger_config, template=swagger_template)
Thanks to Dimaf's answer, which helped me a lot. I just want to add the updated version, in case someone else runs into the same problem.
For Swagger 3.0, the config has been updated to the following (this example is for bearer authorization):
swagger_template = {
    "components": {
        "securitySchemes": {
            "BearerAuth": {
                "type": "http",
                "scheme": "bearer",
                "bearerFormat": "JWT",
                "in": "header",
            }
        }
    }
}

New-StreamAnalyticsJob cannot create Operations Monitoring Input for an IOT Hub

We have a Stream Analytics job with an input mapped to an IoT Hub Operations Monitoring endpoint. We originally defined our job in the Azure Portal, and it works fine when created/updated that way.
We use the job logic in multiple "Azure environments" and now keep it in source control, using the Visual Studio Stream Analytics project type to manage the source code.
We are using the New-StreamAnalyticsJob PowerShell command to deploy our job into different environments.
Each time we deploy, however, the resulting Stream Analytics job's input points to the Messaging endpoint of our IoT Hub instead of the Operations Monitoring endpoint.
Is there something we can put in the input's JSON file to express the endpoint type? Here is the Inputs content of the JSON we pass to the cmdlet:
"Inputs": [{
    "Name": "IOT-Hub-Monitoring-By-Consumer-Group",
    "Properties": {
        "DataSource": {
            "Properties": {
                "ConsumerGroupName": "theConsumerGroup",
                "IotHubNamespace": "theIotNamespace",
                "SharedAccessPolicyKey": null,
                "SharedAccessPolicyName": "iothubowner"
            },
            "Type": "Microsoft.Devices/IotHubs"
        },
        "Serialization": {
            "Properties": {
                "Encoding": "UTF8",
                "Format": "LineSeparated"
            },
            "Type": "Json"
        },
        "Type": "Stream"
    }
},
{
    "Name": "IOT-Hub-Messaging-By-Consumer-Group",
    "Properties": {
        "DataSource": {
            "Properties": {
                "ConsumerGroupName": "anotherConsumerGroup",
                "IotHubNamespace": "theIotNamespace",
                "SharedAccessPolicyKey": null,
                "SharedAccessPolicyName": "iothubowner"
            },
            "Type": "Microsoft.Devices/IotHubs"
        },
        "Serialization": {
            "Properties": {
                "Encoding": "UTF8",
                "Format": "LineSeparated"
            },
            "Type": "Json"
        },
        "Type": "Stream"
    }
}]
Is there an endpoint element within the IoT Hub properties that we're not expressing? Is it documented somewhere?
I notice that the Azure Portal calls a different endpoint than the one documented here: https://learn.microsoft.com/en-us/rest/api/streamanalytics/stream-analytics-definition
It uses endpoints under https://main.streamanalytics.ext.azure.com/api, e.g.
GET /api/Jobs/GetStreamingJob?subscriptionId={guid}&resourceGroupName=MyRG&jobName=MyJobName
You'll notice in the result JSON:
{
    "properties": {
        "inputs": {
            {
                "properties": {
                    "datasource": {
                        "inputIotHubSource": {
                            "iotHubNamespace": "HeliosIOTHubDev",
                            "sharedAccessPolicyName": "iothubowner",
                            "sharedAccessPolicyKey": null,
                   --->     "endpoint": "messages/events",     <---
                            "consumerGroupName": "devicehealthmonitoring"
                        }
For operations monitoring you will see "endpoint":"messages/operationsMonitoringEvents"
They seem to implement Save for inputs as PATCH /api/Inputs/PatchInput?..., which takes a similarly constructed JSON with the same two possible values for endpoint.
Are you able to use that endpoint somehow? I.e., call New-AzureRmStreamAnalyticsJob as you normally would, then Invoke-WebRequest -Method Patch -Uri ...
--Edit--
The Invoke-WebRequest was a no-go: far too much authentication to try to replicate/emulate.
A better option is to go through this tutorial to create a console application and set the endpoint after deploying using the Powershell scripts.
Something like this should work (albeit with absolutely no error/null checks):
string tenantId = "...";        // Tenant Id Guid
string subscriptionId = "...";  // Subscription Id Guid
string rgName = "...";          // Name of Resource Group
string jobName = "...";         // Name of Stream Analytics Job
string inputName = "...";       // Name-of-Input-requiring-operations-monitoring
string accesskey = "...";       // Shared Access Key for the IoT Hub

var login = new ServicePrincipalLoginInformation();
login.ClientId = "...";     // Client / Application Id for AD Service Principal (from tutorial)
login.ClientSecret = "..."; // Password for AD Service Principal (from tutorial)

var environment = new AzureEnvironment
{
    AuthenticationEndpoint = "https://login.windows.net/",
    GraphEndpoint = "https://graph.windows.net/",
    ManagementEndpoint = "https://management.core.windows.net/",
    ResourceManagerEndpoint = "https://management.azure.com/",
};

var credentials = new AzureCredentials(login, tenantId, environment)
    .WithDefaultSubscription(subscriptionId);

var azure = Azure
    .Configure()
    .WithLogLevel(HttpLoggingDelegatingHandler.Level.Basic)
    .Authenticate(credentials)
    .WithDefaultSubscription();

var client = new StreamAnalyticsManagementClient(credentials);
client.SubscriptionId = azure.SubscriptionId;

var job = client.StreamingJobs.List(expand: "inputs").Where(j => j.Name == jobName).FirstOrDefault();
var input = job.Inputs.Where(i => i.Name == inputName).FirstOrDefault();
var props = input.Properties as StreamInputProperties;
var ds = props.Datasource as IoTHubStreamInputDataSource;
ds.Endpoint = "messages/operationsMonitoringEvents";
ds.SharedAccessPolicyKey = accesskey;
client.Inputs.CreateOrReplace(input, rgName, jobName, inputName);
The suggestion from @DaveMontgomery was a good one but turned out to be unnecessary.
A simple cmdlet upgrade addressed the issue.
The root issue turned out to be that the Azure PowerShell cmdlets, up to and including version 4.1.x, were using an older version of the Microsoft.Azure.Management.StreamAnalytics assembly, namely 1.0. Version 2.0 of Microsoft.Azure.Management.StreamAnalytics came out some months ago, and that release included, as I understand it, adding an endpoint element to the Inputs JSON structure.
The new cmdlets release is documented here: https://github.com/Azure/azure-powershell/releases/tag/v4.2.0-July2017. The commits for the release include https://github.com/Azure/azure-powershell/commit/0c00632aa8f767e58077e966c04bb6fc505da1ef, which upgrades to Microsoft.Azure.Management.StreamAnalytics v2.0.
Note that this was a breaking change, in that the JSON changed from PascalCase to camelCase.
With this change in hand, we can add an endpoint element to the Properties / DataSource / Properties of the IoT input, and the as-deployed Stream Analytics job contains an IoT input properly bound to the operationsMonitoring endpoint.
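For illustration, here is a hedged sketch of what the monitoring input from the question might look like after the upgrade, with the endpoint element added and the keys switched to camelCase; I haven't verified the exact schema, so treat the field names as an assumption modeled on the portal JSON above:

```json
"inputs": [{
    "name": "IOT-Hub-Monitoring-By-Consumer-Group",
    "properties": {
        "type": "Stream",
        "datasource": {
            "type": "Microsoft.Devices/IotHubs",
            "properties": {
                "iotHubNamespace": "theIotNamespace",
                "sharedAccessPolicyName": "iothubowner",
                "sharedAccessPolicyKey": null,
                "consumerGroupName": "theConsumerGroup",
                "endpoint": "messages/operationsMonitoringEvents"
            }
        },
        "serialization": {
            "type": "Json",
            "properties": {
                "encoding": "UTF8",
                "format": "LineSeparated"
            }
        }
    }
}]
```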