Unable to log in to CDK-created Amazon MQ (RabbitMQ) web console

When creating a publicly accessible Amazon MQ instance (with RabbitMQ under the hood) manually, I can easily log in to the web console.
However, when creating an MQ instance with the same settings and credentials through CDK, I can't log in to the web console. The only response from the RabbitMQ service is:
{
"error": "not_authorised",
"reason": "Login failed"
}
The CloudWatch logs indicate that the user was created, but also warn that the user tried to log in using invalid credentials:
2021-07-02 14:20:54.867 [info] <0.1474.0> Created user 'admin'
2021-07-02 14:20:55.587 [info] <0.1481.0> Successfully set user tags for user 'admin' to [administrator]
2021-07-02 14:20:56.295 [info] <0.1488.0> Successfully set permissions for 'admin' in virtual host '/' to '.*', '.*', '.*'
2021-07-02 14:26:14.529 [warning] <0.1639.0> HTTP access denied: user 'admin' - invalid credentials
The construction of the Broker looks like this:
private createMessageBroker(vpc: Vpc, stage: Stage) {
  const password: Secret = new Secret(this, 'BrokerAdminPassword', {
    generateSecretString: { excludePunctuation: true },
    description: 'Password for the Message Broker User',
  });
  const user: CfnBroker.UserProperty = {
    consoleAccess: true,
    username: 'admin',
    password: password.toString(),
  };
  new CfnBroker(this, 'TaskMessageBroker', {
    autoMinorVersionUpgrade: true,
    brokerName: 'MessageBroker',
    deploymentMode: 'SINGLE_INSTANCE',
    engineType: 'RABBITMQ',
    engineVersion: '3.8.11',
    hostInstanceType: 'mq.t3.micro',
    publiclyAccessible: true,
    users: [user],
    logs: { general: true },
  });
}

Try using the following instead when instantiating your UserProperty. Calling password.toString() stringifies the Secret construct into an unresolved token rather than the secret's value, so the broker is created with the wrong password. password.secretValue.toString() renders a reference that CloudFormation resolves to the real password at deploy time:
const user: CfnBroker.UserProperty = {
  consoleAccess: true,
  username: 'admin',
  password: password.secretValue.toString(),
};


OIDC-react signOut doesn't end Cognito session

My stack: oidc-react, Amazon Cognito
When I log out on the site, I call auth.signOut(); the userManager signs out the user and redirects to the login page. But when I log in again by calling auth.signIn(), it makes a request to Cognito with the token it still has, doesn't ask for credentials, and logs the user in. I guess this is because the user still has a running session with the same token, if I am right. It only asks for login credentials after an hour, because the session expires after 60 minutes.
I want Cognito to ask for credentials after signing out. After some research I've tried passing the config these options, but they don't seem to be working:
revokeTokenTypes: (["access_token", "refresh_token"]),
revokeTokensOnSignout: true,
automaticSilentRenew: false,
monitorSession: true
This is the OIDC setup I pass to the provider:
const oidcConfig: AuthProviderProps = {
  onSignIn: (user: User | null) => {
    console.log("onSignIn");
  },
  onSignOut: (options: AuthProviderSignOutProps | undefined) => {
    console.log('onSignOut');
  },
  autoSignIn: false,
  loadUserInfo: true,
  postLogoutRedirectUri: "localhost:3000/",
  automaticSilentRenew: false,
  authority: "https://" + process.env.REACT_APP_AWS_COGNITO_DOMAIN,
  clientId: process.env.REACT_APP_AWS_COGNITO_CLIENT_ID,
  redirectUri: window.location.origin,
  responseType: 'code',
  userManager: new UserManager({
    authority: "https://" + process.env.REACT_APP_AWS_COGNITO_DOMAIN,
    client_id: process.env.REACT_APP_AWS_COGNITO_CLIENT_ID!,
    redirect_uri: window.location.origin,
    revokeTokenTypes: (["access_token", "refresh_token"]),
    revokeTokensOnSignout: true,
    automaticSilentRenew: false,
    monitorSession: true
  })
};
AWS Cognito does not yet implement the RP-Initiated Logout specification or return an end_session_endpoint from its OpenID Connect discovery endpoint. I expect this is your problem, since the library is probably implemented in terms of these standards.
Instead, AWS Cognito uses its own parameters and a /logout endpoint. In my apps I have implemented Cognito logout by forming a URL like this, then redirecting to it by setting location.href to the URL value:
public buildLogoutUrl(): string {
  const logoutReturnUri = encodeURIComponent(this._configuration.postLogoutRedirectUri);
  const clientId = encodeURIComponent(this._configuration.clientId);
  return `${this._configuration.logoutEndpoint}?client_id=${clientId}&logout_uri=${logoutReturnUri}`;
}
This will enable you to end the Cognito session and force the user to sign in again. It will also enable you to return to a specific location within your app, such as a /loggedout view.
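The same idea as a standalone sketch (the domain, client id, and return URI below are placeholders; the /logout path is Cognito's hosted-UI logout endpoint):

```typescript
// Build the Cognito hosted-UI logout URL; redirecting the browser to it
// ends the Cognito session and sends the user to logout_uri afterwards.
function buildCognitoLogoutUrl(domain: string, clientId: string, logoutUri: string): string {
  return `https://${domain}/logout?client_id=${encodeURIComponent(clientId)}&logout_uri=${encodeURIComponent(logoutUri)}`;
}

// Usage in a browser: window.location.href = buildCognitoLogoutUrl(...)
```

Note that the logout_uri value must also be registered as an allowed sign-out URL on the Cognito app client, or the /logout endpoint will reject the request.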

How to Use Azure Managed Identity as Credential for Debugging Cosmos Database Access Inside of a Blazor Server App?

Background:
As described in bullet #4 of my other post, I'm trying to follow CDennig's example bicep script to grant my Blazor (server) application access to both my key vault (to get the AAD client secret) and the Azure Cosmos DB. CDennig's bicep script grants a managed identity (MI) access (RBAC) to the Cosmos database. Unlike CDennig's use of a Kubernetes cluster, I'm using an App Service web app and assigning the MI as the user-assigned principal of the web app.
When I deploy to Azure App Service as a web app, the deployment fails with no error messages. I strongly suspect the problem is that the managed identity (MI) is being denied access to my key vault, despite my bicep script granting it access.
Questions:
How can I run my Blazor web app locally on my development machine using the MI as the credential, to simulate the environment of running inside the App Service web app and confirm my hypothesis that key vault access is the problem?
If it is the problem, how would I fix it? See my bicep script below, where I grant the MI access to the key vault.
2022 April 17 Sun Morning Update:
The problem has been partially fixed: after granting both the system-assigned and the user-assigned MI of the App Service web app access to the key vault, I can now log in when running on Azure. It seems that Azure is ignoring my user-assigned MI for key vault access, and I suspect it is also ignoring the RBAC for the user-assigned MI. When running locally on my development machine I can also write to the Cosmos database, because of the RBAC applied to my personal Azure account.
I would still like to know how to run under the user-assigned MI when running locally on my development machine. Is this possible with one of the DefaultAzureCredential classes?
And of course, I would still like to know how to access the Cosmos database via RBAC (no passwords) when deployed to Azure.
2022 April 17 Sun Evening Update: Progress!
When I grant RBAC access to the system-assigned service principal of the Azure App Service web app using PowerShell, it works and I can access the Cosmos database from Azure App Service. Please help me fix my bicep script. When I try to fix my script by assigning RBAC to the user-assigned service principal, I get the error described here.
It seems that this could be done with DefaultAzureCredential, but I'm not clear on how to do this.
Here is my bicep script:
/*
* Begin commands to execute this file using Azure CLI with PowerShell
* $name='AADB2C_BlazorServerDemo'
* $rg="rg_$name"
* $loc='westus2'
* echo az.cmd group create --location $loc --resource-group $rg
* az.cmd group create --location $loc --resource-group $rg
* echo Set-AzDefault -ResourceGroupName $rg
* Set-AzDefault -ResourceGroupName $rg
* echo begin create deployment group
* az.cmd identity create --name umid-cosmosid --resource-group $rg --location $loc
* $MI_PRINID=$(az identity show -n umid-cosmosid -g $rg --query "principalId" -o tsv)
* write-output "principalId=${MI_PRINID}"
* az.cmd deployment group create --name $name --resource-group $rg --template-file deploy.bicep --parameters '@deploy.parameters.json' --parameters managedIdentityName=umid-cosmosid ownerId=$env:AZURE_OBJECTID --parameters principalId=$MI_PRINID
* Get-AzResource -ResourceGroupName $rg | ft
* echo end create deployment group
* End commands to execute this file using Azure CLI with Powershell
*
*/
@description('AAD Object ID of the developer so s/he can access key vault when running on development')
param ownerId string
@description('Principal ID of the managed identity')
param principalId string
var principals = [
principalId
ownerId
]
@description('The base name for resources')
param name string = uniqueString(resourceGroup().id)
@description('The location for resources')
param location string = resourceGroup().location
@description('Cosmos DB Configuration [{key:"", value:""}]')
param cosmosConfig object
@description('Azure AD B2C Configuration [{key:"", value:""}]')
param aadb2cConfig object
@description('Azure AD B2C App Registration client secret')
@secure()
param clientSecret string
@description('Dummy Azure AD B2C App CosmosConnectionString')
@secure()
param cosmosConnectionString string = newGuid()
@description('The web site hosting plan')
@allowed([
'F1'
'D1'
'B1'
'B2'
'B3'
'S1'
'S2'
'S3'
'P1'
'P2'
'P3'
'P4'
])
param sku string = 'F1'
@description('The App Configuration SKU. Only "standard" supports customer-managed keys from Key Vault')
@allowed([
'free'
'standard'
])
param configSku string = 'free'
resource config 'Microsoft.AppConfiguration/configurationStores@2020-06-01' = {
name: 'asc-${name}config'
location: location
sku: {
name: configSku
}
resource Aadb2cConfigValues 'keyValues@2020-07-01-preview' = [for item in items(aadb2cConfig): {
name: 'AzureAdB2C:${item.key}'
properties: {
value: item.value
}
}]
resource CosmosConfigValues 'keyValues@2020-07-01-preview' = [for item in items(cosmosConfig): {
name: 'CosmosConfig:${item.key}'
properties: {
value: item.value
}
}]
resource aadb2cClientSecret 'keyValues@2020-07-01-preview' = {
// Store secrets in Key Vault with a reference to them in App Configuration e.g., client secrets, connection strings, etc.
name: 'AzureAdB2C:ClientSecret'
properties: {
// Most often you will want to reference a secret without the version so the current value is always retrieved.
contentType: 'application/vnd.microsoft.appconfig.keyvaultref+json;charset=utf-8'
value: '{"uri":"${kvaadb2cSecret.properties.secretUri}"}'
}
}
resource cosmosConnectionStringSecret 'keyValues@2020-07-01-preview' = {
// Store secrets in Key Vault with a reference to them in App Configuration e.g., client secrets, connection strings, etc.
name: 'CosmosConnectionStringSecret'
properties: {
// Most often you will want to reference a secret without the version so the current value is always retrieved.
contentType: 'application/vnd.microsoft.appconfig.keyvaultref+json;charset=utf-8'
value: '{"uri":"${kvCosmosConnectionStringSecret.properties.secretUri}"}'
}
}
}
resource kv 'Microsoft.KeyVault/vaults@2019-09-01' = {
// Make sure the Key Vault name begins with a letter.
name: 'kv-${name}'
location: location
properties: {
sku: {
family: 'A'
name: 'standard'
}
tenantId: subscription().tenantId
accessPolicies: [
{
tenantId: subscription().tenantId
objectId: ownerId
permissions:{
secrets:[
'all'
]
}
}
{
tenantId: subscription().tenantId
objectId: principalId
permissions: {
// Secrets are referenced by and enumerated in App Configuration so 'list' is not necessary.
secrets: [
'get'
]
}
}
]
}
}
// Separate resource from parent to reference in configSecret resource.
resource kvaadb2cSecret 'Microsoft.KeyVault/vaults/secrets@2019-09-01' = {
name: '${kv.name}/AzureAdB2CClientSecret'
properties: {
value: clientSecret
}
}
resource kvCosmosConnectionStringSecret 'Microsoft.KeyVault/vaults/secrets@2019-09-01' = {
name: '${kv.name}/CosmosConnectionStringSecret'
properties: {
value: cosmosConnectionString
}
}
resource plan 'Microsoft.Web/serverfarms@2020-12-01' = {
name: '${name}plan'
location: location
sku: {
name: sku
}
kind: 'linux'
properties: {
reserved: true
}
}
@description('Specifies managed identity name')
param managedIdentityName string
resource msi 'Microsoft.ManagedIdentity/userAssignedIdentities@2018-11-30' existing = {
name: managedIdentityName
}
// https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.web/web-app-managed-identity-sql-db/main.bicep#L73
resource web 'Microsoft.Web/sites@2020-12-01' = {
name: '${name}web'
location: location
identity: {
type: 'UserAssigned'
userAssignedIdentities: {
'${msi.id}': {}
}
}
properties: {
httpsOnly: true
serverFarmId: plan.id
siteConfig: {
linuxFxVersion: 'DOTNETCORE|6'
connectionStrings: [
{
name: 'AppConfig'
connectionString: listKeys(config.id, config.apiVersion).value[0].connectionString
}
]
}
}
}
output appConfigConnectionString string = listKeys(config.id, config.apiVersion).value[0].connectionString
// output siteUrl string = 'https://${web.properties.defaultHostName}/'
output vaultUrl string = kv.properties.vaultUri
var dbName = 'rbacsample'
var containerName = 'data'
// Cosmos DB Account
resource cosmosDbAccount 'Microsoft.DocumentDB/databaseAccounts@2021-06-15' = {
name: 'cosmos-${uniqueString(resourceGroup().id)}'
location: location
kind: 'GlobalDocumentDB'
properties: {
consistencyPolicy: {
defaultConsistencyLevel: 'Session'
}
locations: [
{
locationName: location
failoverPriority: 0
}
]
capabilities: [
{
name: 'EnableServerless'
}
]
disableLocalAuth: false // switch to 'true', if you want to disable connection strings/keys
databaseAccountOfferType: 'Standard'
enableAutomaticFailover: true
publicNetworkAccess: 'Enabled'
}
}
// Cosmos DB
resource cosmosDbDatabase 'Microsoft.DocumentDB/databaseAccounts/sqlDatabases@2021-06-15' = {
name: '${cosmosDbAccount.name}/${dbName}'
location: location
properties: {
resource: {
id: dbName
}
}
}
// Data Container
resource containerData 'Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers@2021-06-15' = {
name: '${cosmosDbDatabase.name}/${containerName}'
location: location
properties: {
resource: {
id: containerName
partitionKey: {
paths: [
'/partitionKey'
]
kind: 'Hash'
}
}
}
}
@batchSize(1)
module cosmosRole 'cosmosRole.bicep' = [for (princId, jj) in principals: {
name: 'cosmos-role-definition-and-assignment-${jj}'
params: {
cosmosDbAccountId: cosmosDbAccount.id
cosmosDbAccountName: cosmosDbAccount.name
principalId: princId
it: jj
}
}]
Here is the module to create the role assignments and definitions:
@description('cosmosDbAccountId')
param cosmosDbAccountId string
@description('cosmosDbAccountName')
param cosmosDbAccountName string
@description('iteration')
param it int
@description('Principal ID of the managed identity')
param principalId string
var roleDefId = guid('sql-role-definition-', principalId, cosmosDbAccountId)
var roleDefName = 'Custom Read/Write role-${it}'
var roleAssignId = guid(roleDefId, principalId, cosmosDbAccountId)
resource roleDefinition 'Microsoft.DocumentDB/databaseAccounts/sqlRoleDefinitions@2021-06-15' = {
name: '${cosmosDbAccountName}/${roleDefId}'
properties: {
roleName: roleDefName
type: 'CustomRole'
assignableScopes: [
cosmosDbAccountId
]
permissions: [
{
dataActions: [
'Microsoft.DocumentDB/databaseAccounts/readMetadata'
'Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/items/*'
]
}
]
}
}
resource roleAssignment 'Microsoft.DocumentDB/databaseAccounts/sqlRoleAssignments@2021-06-15' = {
name: '${cosmosDbAccountName}/${roleAssignId}'
properties: {
roleDefinitionId: roleDefinition.id
principalId: principalId
scope: cosmosDbAccountId
}
}

IAM Authentication to RDS from Lambda (NodeJS)

I'm a bit lost as to what could be wrong here. I have been following this set of instructions to get my Lambdas to connect to RDS using IAM authentication:
https://cloudonaut.io/passwordless-database-authentication-for-aws-lambda/
Now, I have checked that my RDS cluster indeed has IAM authentication enabled. I have created the user and granted all necessary permissions (I've done this a few times), and I have verified that the "Signer" step does indeed generate a token that's supposedly valid.
Furthermore, the Lambdas have the rds-db:connect permission, and I even added RDS full access to the Lambda role (although I reckon this shouldn't be necessary).
Still, when I attempt to connect, I simply get this:
{
"errorType": "Error",
"errorMessage": "Access denied for user 'mydbuser'@'19.196.193.217' (using password: YES)",
"code": "ER_ACCESS_DENIED_ERROR",
"errno": 1045,
"sqlState": "28000",
"sqlMessage": "Access denied for user 'mydbuser'@'19.196.193.217' (using password: YES)"
}
I also noticed that the token I get back has some URL-encoded bits, so I tried URL-decoding it, but that didn't lead anywhere.
I'm out of ideas as to where this could even be failing, and the error messages are utterly unhelpful. The grants on the user in question are:
GRANT USAGE ON *.* TO 'mydbuser'@'%'
GRANT ALL PRIVILEGES ON `mydbuser`.* TO 'mydbuser'@'%'
So this hints at it not accepting the token as a password. My connection code looks like this:
const mysql = require('mysql2');
...
signer.getAuthToken({
  region: 'my+region',
  hostname: process.env.ENDPOINT,
  port: 3306,
  username: 'mydbuser'
}, function(err, token) {
  var connection = mysql.createConnection({
    host: process.env.ENDPOINT,
    port: 3306,
    user: 'mydbuser',
    password: token,
    database: 'mydb',
    ssl: { ca: fs.readFileSync(__dirname + '/rds-combined-ca-bundle.pem') },
    authSwitchHandler: function (data, cb) {
      if (data.pluginName === 'mysql_clear_password') {
        cb(null, Buffer.from(token + '\0'));
      }
    }
  });
  connection.connect((res, err) => {
    console.log(res, err);
  });
  connection.query(
    `SELECT 1;`, ...
Also the full generated token does include the X-Amz-Security-Token so it hints at it being valid from what I've researched so far.
What else could even fail at this point?
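One thing worth double-checking (an assumption, since the question doesn't show how the user was created): per the AWS documentation, a MySQL user on RDS must be created with the AWS authentication plugin for IAM tokens to be accepted. A regular password-based user, such as the one implied by the grants above, rejects the signed token:

```sql
-- The user authenticates via the AWS plugin instead of a stored password;
-- the database and user names here are taken from the question.
CREATE USER 'mydbuser'@'%' IDENTIFIED WITH AWSAuthenticationPlugin AS 'RDS';
GRANT ALL PRIVILEGES ON `mydbuser`.* TO 'mydbuser'@'%';
```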

Infinispan java.lang.SecurityException: ISPN006017: Unauthorized 'PUT' operation

I am trying to put a value in an Infinispan cache using the Hot Rod Node.js client. The code runs fine if the server is installed locally. However, when I run the same code against an Infinispan server hosted in a Docker container, I get the following error:
java.lang.SecurityException: ISPN006017: Unauthorized 'PUT' operation
try {
  client = await infinispan.client({
    port: 11222,
    host: '127.0.0.1'
  }, {
    cacheName: 'testcache'
  });
  console.log(`Connected to cache`);
  await client.put('test', 'hello 1');
  await client.disconnect();
} catch (e) {
  console.log(e);
  await client.disconnect();
}
I have also tried setting the CORS allow-all option on the server.
You need to provide a custom config.yaml to Docker with the following configuration:
endpoints:
  hotrod:
    auth: false
    enabled: false
    qop: auth
    serverName: infinispan
Unfortunately the Node.js client doesn't support authentication yet. The issue tracking this is https://issues.redhat.com/projects/HRJS/issues/HRJS-36

"Execution failed" when setting up API Gateway and Fargate with AWS CDK

I am trying to set up AWS API Gateway to access a Fargate container in a private VPC, as described here. For this I am using AWS CDK, as shown below. But when I curl the endpoint after a successful cdk deploy, I get "Internal Server Error" as a response, and I can't find any additional information. For some reason API Gateway can't reach the container.
So when I curl the endpoint like this:
curl -i https://xxx.execute-api.eu-central-1.amazonaws.com/prod/MyResource
... I get the following log output in cloud watch:
Extended Request Id: NpuEPFWHliAFm_w=
Verifying Usage Plan for request: 757c6b9e-c4af-4dab-a5b1-542b15a1ba21. API Key: API Stage: ...
API Key authorized because method 'ANY /MyResource/{proxy+}' does not require API Key. Request will not contribute to throttle or quota limits
Usage Plan check succeeded for API Key and API Stage ...
Starting execution for request: 757c6b9e-c4af-4dab-a5b1-542b15a1ba21
HTTP Method: GET, Resource Path: /MyResource/test
Execution failed due to configuration error: There was an internal error while executing your request
CDK Code
First I create a network load balanced fargate service:
private setupService(): NetworkLoadBalancedFargateService {
  const vpc = new Vpc(this, 'MyVpc');
  const cluster = new Cluster(this, 'MyCluster', {
    vpc: vpc,
  });
  cluster.connections.allowFromAnyIpv4(Port.tcp(5050));
  const taskDefinition = new FargateTaskDefinition(this, 'MyTaskDefinition');
  const container = taskDefinition.addContainer('MyContainer', {
    image: ContainerImage.fromRegistry('vad1mo/hello-world-rest'),
  });
  container.addPortMappings({
    containerPort: 5050,
    hostPort: 5050,
  });
  const service = new NetworkLoadBalancedFargateService(this, 'MyFargateServie', {
    cluster,
    taskDefinition,
    assignPublicIp: true,
  });
  service.service.connections.allowFromAnyIpv4(Port.tcp(5050));
  return service;
}
Next I create the VpcLink and the API Gateway:
private setupApiGw(service: NetworkLoadBalancedFargateService) {
  const api = new RestApi(this, `MyApi`, {
    restApiName: `MyApi`,
    deployOptions: {
      loggingLevel: MethodLoggingLevel.INFO,
    },
  });
  // setup api resource which forwards to container
  const resource = api.root.addResource('MyResource');
  resource.addProxy({
    anyMethod: true,
    defaultIntegration: new HttpIntegration('http://localhost.com:5050', {
      httpMethod: 'ANY',
      options: {
        connectionType: ConnectionType.VPC_LINK,
        vpcLink: new VpcLink(this, 'MyVpcLink', {
          targets: [service.loadBalancer],
          vpcLinkName: 'MyVpcLink',
        }),
      },
      proxy: true,
    }),
    defaultMethodOptions: {
      authorizationType: AuthorizationType.NONE,
    },
  });
  resource.addMethod('ANY');
  this.addCorsOptions(resource);
}
Does anyone have a clue what is wrong with this config?
After hours of trying, I finally figured out that the security groups do not seem to be updated correctly when setting up the VpcLink with CDK. Broadening the allowed connections with
service.service.connections.allowFromAnyIpv4(Port.allTraffic())
solved it. I still need to figure out which minimum set of rules should be used instead of allTraffic().
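As for that minimum set, one narrower rule to try (a sketch, untested) is to allow only the container port from within the VPC, since the VPC-link and NLB health-check traffic originates from addresses inside the VPC and NLBs themselves carry no security group:

```typescript
import { Peer, Port } from '@aws-cdk/aws-ec2';

// Untested sketch: allow TCP 5050 from the VPC CIDR instead of all traffic.
// service.cluster.vpc reaches the VPC created in setupService().
const vpc = service.cluster.vpc;
service.service.connections.allowFrom(
  Peer.ipv4(vpc.vpcCidrBlock),
  Port.tcp(5050),
);
```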
Additionally, I replaced localhost in the HttpIntegration with the DNS name of the load balancer, like this:
resource.addMethod("ANY", new HttpIntegration(
  'http://' + service.loadBalancer.loadBalancerDnsName,
  {
    httpMethod: 'ANY',
    options: {
      connectionType: ConnectionType.VPC_LINK,
      vpcLink: new VpcLink(this, 'MyVpcLink', {
        targets: [service.loadBalancer],
        vpcLinkName: 'MyVpcLink',
      })
    },
  }
))