cross account SNS topic publish - amazon-cloudwatch

I have an SNS topic in Account A that receives an error and sends it to PagerDuty.
I have another account, Account B, which has several services in it (Fargate, SQS, etc.), and several CloudWatch alarms / actions set up to publish alerts to this SNS topic in Account A.
I get this error in Account B (the one trying to access the cross-account service):
Failed to execute action arn:aws:ACCOUNT-A:sns-topic. Received error: "Resource: arn:aws:cloudwatch:ACCOUNT-B is not authorized to perform: SNS:Publish on resource: ACCOUNT-A:sns"
Here is my AWS CDK code for Account A:
const accountATopic = new sns.Topic(this, 'accountATopic', {
  displayName: 'accountATopic',
});
accountATopic.addSubscription(new snsSubscriptions.UrlSubscription('Externalurl'));
accountATopic.grantPublish(new iam.AccountPrincipal('ACCOUNTB'));
And then in Account B (not showing the alarms):
const ACCOUNTBTopic = sns.Topic.fromTopicArn(this, 'ACCOUNTBTopic', 'ACCOUNT-A-ARN');
const action = new cloudwatchActions.SnsAction(ACCOUNTBTopic);

ACCOUNTBTopic.addToResourcePolicy(new PolicyStatement({
  resources: ['ACCOUNT-A-ARN'],
  actions: ['SNS:Publish'],
  effect: Effect.ALLOW,
}));

For anyone who comes across this in the future, here is what I did to get it working:
const accountATopic = new sns.Topic(this, 'accountATopic', {
  displayName: 'accountATopic',
});
accountATopic.addSubscription(new snsSubscriptions.UrlSubscription('Externalurl'));

let snsPolicy = new PolicyStatement({
  effect: Effect.ALLOW,
  resources: [accountATopic.topicArn],
  actions: ["SNS:Publish"],
  principals: [
    new AccountPrincipal('ACCOUNTB_ID'),
  ],
});
// or optionally:
// snsPolicy.addAnyPrincipal()
accountATopic.addToResourcePolicy(snsPolicy);
The grantPublish method did not seem to make this work. Also, the AWS documentation on cross-account SNS / CloudWatch suggests adding the organization id as a condition; that did not work for me and I had to remove it.
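For reference, on the Account B side nothing beyond the imported topic and the alarm action should be needed; addToResourcePolicy is effectively a no-op on an imported topic, so it can be dropped. A minimal sketch inside the Account B stack (CDK v2 module paths assumed; the alarm variable and topic ARN are placeholders):

import * as sns from 'aws-cdk-lib/aws-sns';
import * as cloudwatchActions from 'aws-cdk-lib/aws-cloudwatch-actions';

// Import the Account A topic by its full ARN (placeholder here)
const crossAccountTopic = sns.Topic.fromTopicArn(this, 'CrossAccountTopic', 'ACCOUNT-A-ARN');

// The resource policy added in Account A is what authorizes CloudWatch in
// Account B to publish; the alarm just needs the action attached.
existingAlarm.addAlarmAction(new cloudwatchActions.SnsAction(crossAccountTopic));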

Related

In Fargate container why can I CRUD S3 but can't create a presigned post

I'm using Node in a Docker container. Locally I use my IAM keys for creating, reading, and deleting files in an S3 bucket, as well as for creating pre-signed posts. When up on a Fargate container, I create a taskRole and attach a policy which gives it full access to S3.
taskRole.attachInlinePolicy(
  new iam.Policy(this, `${clientPrefix}-task-policy`, {
    statements: [
      new iam.PolicyStatement({
        effect: iam.Effect.ALLOW,
        actions: ['S3:*'],
        resources: ['*'],
      }),
    ],
  })
);
With that role, I can create, read, and delete files with no issues from the API. When the API tries to create a pre-signed post, however, I get the error:
Error: Unable to create a POST object policy without a bucket, region, and credentials
It seems super strange to me that I can run the other operations, but it fails with the presignedPOST, especially since my S3 actions are all allowed.
const post: aws.S3.PresignedPost = await s3.createPresignedPost({
  Bucket: bucket,
  Fields: { key },
  Expires: 60,
  Conditions: [['content-length-range', 0, 5242880]],
});
Here is the code I use. I am logging the bucket and key, so I'm positive they are valid values. One thought I had: when running locally I run aws.configure to set my keys, but in Fargate I purposefully omit that. I assumed it was getting the right keys, since the other S3 operations work without fail. Am I approaching this right?
When using IAM role credentials with the AWS SDK, you must either use the asynchronous (callback) version of createPresignedPost or guarantee that your credentials have been resolved before calling the synchronous version of this method.
Something like this will work with IAM based credentials:
const s3 = new AWS.S3()

const _presign = params => {
  return new Promise((res, rej) => {
    s3.createPresignedPost(params, (err, data) => {
      if (err) return rej(err)
      return res(data)
    })
  })
}

// await _presign(...) <- works
// await s3.createPresignedPost(...) <- won't work
Refer: https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#createPresignedPost-property
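Alternatively (an untested sketch, not from the original answer): resolve the credential provider chain up front with AWS.config.getCredentials, and only then call createPresignedPost synchronously:

const AWS = require('aws-sdk');

// Force the ECS task-role credentials to resolve before signing anything.
const presignWhenReady = params =>
  new Promise((resolve, reject) => {
    AWS.config.getCredentials(err => {
      if (err) return reject(err);
      // Credentials are now populated, so the synchronous call can sign correctly.
      const s3 = new AWS.S3();
      resolve(s3.createPresignedPost(params));
    });
  });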

CDK deployment API Gateway - CloudWatch Logs role ARN must be set in account settings to enable logging

If you're getting the following error when trying to deploy an API Gateway (in particular the Stage), you'll need to ensure you have a CloudWatch role ARN set up against your account.
Blah_V1Stage (V1Stage) CloudWatch Logs role ARN must be set in account settings to enable logging (Service: AmazonApiGateway; Status Code: 400; Error Code: BadRequestException; Request ID: a855c5c5-b64b-4b22-85e8-703909b4c850)
const cloudWatchRole = new iam.Role(this, this.prefix + "_cloudwatchrole", {
  assumedBy: new iam.CompositePrincipal(new iam.ServicePrincipal("apigateway.amazonaws.com")),
  roleName: this.prefix + "_cloudwatchrole"
});

cloudWatchRole.addManagedPolicy(
  iam.ManagedPolicy.fromAwsManagedPolicyName('service-role/AmazonAPIGatewayPushToCloudWatchLogs'));

const account = new apigateway.CfnAccount(this, "account", {
  cloudWatchRoleArn: cloudWatchRole.roleArn
});
Just as an update: if you are using the RestApi construct, you now just need to set cloudWatchRole: true in the construct props and CDK will do the rest.
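A hedged sketch of that newer form (construct id and logging options are placeholders):

const api = new apigateway.RestApi(this, 'MyApi', {
  cloudWatchRole: true, // CDK creates the role and the ApiGateway account setting for you
  deployOptions: {
    loggingLevel: apigateway.MethodLoggingLevel.INFO,
  },
});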

Cognito unable to signup users that have unconfirmed status already

A Cognito User Pool is configured for the users to use their "email address" to sign up and sign in.
If a user signs up with the email of someone else, then that email will get stuck in UNCONFIRMED state and its owner will not be able to use it properly.
Having said that, let me provide an example with the following scenario:
1. A user signs up with an email address they don't own, let's say someone@mail.com. In this step (the registration form) some more data is sent, such as the organization name and the user's full name.
2. A verification code is sent to that email.
3. Now the user who owns someone@mail.com wants to create an account (maybe some days in the future), so he fills in the registration form, but Cognito throws an error: {"__type":"UsernameExistsException","message":"An account with the given email already exists."}
Things to consider:
* If the email already exists but is in UNCONFIRMED state, give the user the option to resend the link. This option is not optimal because additional data might already be in the user profile, as the first step exemplifies.
* A custom Lambda could delete the unconfirmed user before sign-up, or run as a daily maintenance process, but I am not sure if this is the best approach.
There is also this configuration under Policies in the Cognito console: "How quickly should user accounts created by administrators expire if not used?", but as the name implies this setting only applies to users who were invited by admins.
Is there a proper solution for this predicament?
Amazon Cognito provides a pre sign-up trigger for exactly this functionality (and auto sign-up as well). Your thinking is the same way I implemented it, following the Cognito documentation.
I am using the Amplify CLI as my development toolchain, so the Lambda function used in the trigger is as below:
"use strict";
console.log("Loading function");

var AWS = require("aws-sdk");
var cognitoIdentityServiceProvider = new AWS.CognitoIdentityServiceProvider();

exports.handler = (event, context, callback) => {
  const modifiedEvent = event;

  // check that we're acting on the right trigger
  if (event.triggerSource === "PreSignUp_SignUp") {
    var params = {
      UserPoolId: event.userPoolId,
      Username: event.userName
    };
    cognitoIdentityServiceProvider.adminGetUser(params, function (err, data) {
      if (err) {
        // an error occurred (e.g. the user does not exist yet) - let the sign-up continue
        console.log(err, err.stack);
        return callback(null, modifiedEvent);
      }
      console.log("cognito service", data);
      if (data.UserStatus === "UNCONFIRMED") {
        // delete the stale unconfirmed user so the new sign-up can proceed
        cognitoIdentityServiceProvider.adminDeleteUser(params, function (err) {
          if (err) console.log(err, err.stack); // an error occurred
          else console.log("Unconfirmed user delete successful"); // successful response
          return callback(null, modifiedEvent);
        });
      } else {
        return callback(null, modifiedEvent);
      }
    });
    return;
  }

  // Throw an error if invoked from the wrong trigger
  callback("Misconfigured Cognito Trigger " + event.triggerSource);
};
This will check the user and, if the status is UNCONFIRMED, delete it using the aws-sdk methods adminGetUser and adminDeleteUser.
hope this will help ;)
I got around this by setting ForceAliasCreation=True. This allows the real email owner to confirm their account. The drawback is that you end up with two users: one CONFIRMED user and another UNCONFIRMED user.
To clean this up, I have a Lambda function that calls list-users with a filter for unconfirmed users and deletes the accounts that were created before a certain period. This function is triggered daily by CloudWatch.
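A rough sketch of that cleanup function (SDK v2; the user pool id, the one-day cut-off, and the omitted pagination are assumptions, not the exact code I run):

const AWS = require('aws-sdk');
const cognito = new AWS.CognitoIdentityServiceProvider();

const USER_POOL_ID = process.env.USER_POOL_ID; // assumed to be supplied via the environment
const MAX_AGE_MS = 24 * 60 * 60 * 1000;        // delete unconfirmed users older than a day

exports.handler = async () => {
  // Only fetch users still stuck in UNCONFIRMED state (pagination omitted for brevity)
  const { Users = [] } = await cognito.listUsers({
    UserPoolId: USER_POOL_ID,
    Filter: 'cognito:user_status = "UNCONFIRMED"',
  }).promise();

  for (const user of Users) {
    if (user.UserCreateDate && Date.now() - user.UserCreateDate.getTime() > MAX_AGE_MS) {
      await cognito.adminDeleteUser({
        UserPoolId: USER_POOL_ID,
        Username: user.Username,
      }).promise();
    }
  }
};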
To change a user from unconfirmed to confirmed:
aws cognito-idp admin-confirm-sign-up \
  --user-pool-id %aws_user_pools_web_client_id% \
  --username %email_address%

Botkit slackbot error "Could not load team while processing webhook"

I have created a simple express server and added a /slack/receive route to handle webhook requests from the Slack events API:
// routes.js (which is used by my app defined in server.js)
...
let slack = require('./controllers/slack');
router.post('/slack/receive', slack.receive);
...
I then use Botkit to create a simple Slack application:
// controllers/slack.js
'use strict';

const logger = require('../config/winston');

// initialise firebase storage for botkit
const admin = require('firebase-admin');
var serviceAccount = require('../config/firebase.json');

admin.initializeApp({
  credential: admin.credential.cert(serviceAccount)
});

var db = admin.firestore();
db.settings({
  timestampsInSnapshots: true
})

// initialise botkit for slack
const botkit = require('botkit');

const controller = botkit.slackbot({
  storage: require('botkit-storage-firestore')({ database: db }),
  clientId: process.env.SLACK_CLIENT_ID,
  clientSecret: process.env.SLACK_CLIENT_SECRET,
  clientSigningSecret: process.env.SLACK_SIGNING_SECRET,
  redirectUri: process.env.SLACK_REDIRECT,
  disable_startup_messages: true,
  send_via_rtm: false,
  debug: true,
  scopes: ['bot', 'chat:write:bot'],
})

controller.hears('Hello', 'direct_mention,direct_message', (bot, message) => {
  logger.info(message);
  bot.reply(message, 'I heard a message!');
})

exports.receive = (req, res, next) => {
  res.sendStatus(200);
  logger.debug(req.body);
  controller.handleWebhookPayload(req, res);
};
The server initialises correctly, but as soon as the slack webhook receives a request the following error happens:
Could not load team while processing webhook: Error: could not find team T5VDRMWKX
at E:\Documents\upper-revolutions\node_modules\botkit\lib\SlackBot.js:169:24
at firebaseRef.doc.get.then.catch.err (E:\Documents\upper-revolutions\node_modules\botkit-storage-firestore\src\index.js:86:13)
at <anonymous>
at process._tickCallback (internal/process/next_tick.js:118:7)
So far I have found that:
* Having/not having storage in the Botkit slackbot makes no difference
* The error happens within the handleWebhookPayload method, as code within controller.hears() does not get executed
This error occurs because Botkit needs some form of storage where it can store all the teams (channels and users too) and retrieve them later on.
So, when your handleWebhookPayload method gets executed, it calls another method, findAppropriateTeam, that queries the storage you provided (it might be MongoDB, a JSON file, or something else) for the specified team record. The error is saying that you do not have any record in the storage with the id provided.
This might imply two things:
* You did not provide a storage for Botkit to work with
* You did not save the team id in the storage
The solution to the first problem is quite simple: install MongoDB on your machine and then pass Botkit the MONGO_URL.
NOTE: I see that you are using the Botkit simple storage, and this might be the problem, since I have also experienced trouble with this kind of storage not saving records.
const controller = botkit.slackbot({
  storage: 'mongodb://localhost:27017/yourdb',
})

// OR

const controller = botkit.slackbot({
  storage: process.env.MONGO_URL,
})
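For what it's worth, the form shown in the botkit-storage-mongo README passes a storage adapter object rather than a bare connection string; a hedged sketch of that variant:

const mongoStorage = require('botkit-storage-mongo')({
  mongoUri: process.env.MONGO_URL, // e.g. mongodb://localhost:27017/yourdb
});

const controller = botkit.slackbot({
  storage: mongoStorage,
});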
The possible solution to the second problem:
I will assume you are running Botkit locally, so you must be using some tunneling like ngrok or localtunnel. In that case make sure:
* You provided the redirect URL to Slack (e.g. https://your_url/oauth)
* You accessed the https://your_url/login page
Botkit saves your team id in the provided storage when you access the /login route and authorize the app. So if you skipped that part, Botkit won't save your team id and will therefore throw an error when you receive events later on.
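If you want Botkit to register those routes itself on your existing express app, a hedged sketch (assumes Botkit 0.x, which adds /login and /oauth to the app you pass in):

// somewhere after the express `app` and botkit `controller` are created
controller.createOauthEndpoints(app, (err, req, res) => {
  if (err) {
    res.status(500).send('OAuth failed: ' + err);
  } else {
    res.send('Success! The team is now stored, so incoming webhooks can be matched to it.');
  }
});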
Check this link [https://github.com/howdyai/botkit/issues/938] for discussions on the topic.
I hope this helps!

Why Icinga2 telegram notification fails in specific services?

I have created a custom Telegram notification, very similar to the email notifications. The problem is that it works for hosts and most of the services, but not for all of them.
I am not posting the *.sh files in the scripts folder, since they work!
In constants.conf I have added the bot token:
const TelegramBotToken = "MyTelegramToken"
I wanted to manage Telegram channels or chat ids in the users file, so I have users/user-my-username.conf as below:
object User "my-username" {
  import "generic-user"

  display_name = "My Username"
  groups = ["faxadmins"]
  email = "my-username@domain.com"
  vars.telegram_chat_id = "@my_channel"
}
In templates/templates.conf I have added the below code:
template Host "generic-host-domain" {
  import "generic-host"

  vars.notification.mail.groups = ["domainadmins"]
  vars.notification["telegram"] = {
    users = [ "my-username" ]
  }
}

template Service "generic-service-fax" {
  import "generic-service"

  vars.notification["telegram"] = {
    users = [ "my-username" ]
  }
}
And in notifications I have:
template Notification "telegram-host-notification" {
  command = "telegram-host-notification"
  period = "24x7"
}

template Notification "telegram-service-notification" {
  command = "telegram-service-notification"
  period = "24x7"
}

apply Notification "telegram-notification" to Host {
  import "telegram-host-notification"

  user_groups = host.vars.notification.telegram.groups
  users = host.vars.notification.telegram.users

  assign where host.vars.notification.telegram
}

apply Notification "telegram-notification" to Service {
  import "telegram-service-notification"

  user_groups = host.vars.notification.telegram.groups
  users = host.vars.notification.telegram.users

  assign where host.vars.notification.telegram
}
This is all I have. As I said before, it works for some services and does not work for others. I do not have any Telegram notification configuration in the service or host files.
To test, I use Icinga Web 2: I go to a specific service on a host and send a custom notification. When I send a custom notification I check the log file to see if there is any error, and it says completed:
[2017-01-01 11:48:38 +0000] information/Notification: Sending reminder 'Problem' notification 'host-***!serviceName!telegram-notification for user 'my-username'
[2017-01-01 11:48:38 +0000] information/Notification: Completed sending 'Problem' notification 'host-***!serviceName!telegram-notification' for checkable 'host-***!serviceName' and user 'my-username'.
I should note that email is sent as expected. There is just a problem in telegram notifications for 2 services out of 12.
Any idea what the culprit would be? What is the problem here? Does the return value of the scripts (commands) affect this behaviour?
There is no Telegram config in any service whatsoever.
Some Telegram notification commands may fail because of the Markdown parser.
I've encountered this problem: if the service name contains a single underscore ('_'), the parser complains about an unclosed Markdown tag and the message is not sent.
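A hedged workaround sketch (not from the original scripts): escape the Markdown control characters in the service name before the notification script builds the message text, or drop parse_mode=Markdown from the Telegram API call. The same escaping can be done with sed inside the *.sh script; the helper below is only illustrative:

// Hypothetical helper: escape Telegram Markdown control characters so a lone
// '_' in a service name like "my_service" cannot break message parsing.
const escapeTelegramMarkdown = (text: string): string =>
  text.replace(/([_*`\[])/g, '\\$1');

// escapeTelegramMarkdown('check_disk') === 'check\\_disk'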