AWS Cognito User Migration - Exception during user migration - amazon-cognito

I have created a user pool and am trying to migrate users from RDS using a user migration Lambda trigger that returns the updated event object, but it is not working for me.
I followed the suggested solution of removing the two fields below, but it still doesn't work:
"desiredDeliveryMediums": "EMAIL",
"forceAliasCreation": "false"
Here is the response object I am sending from the Lambda; I am still facing the same issue - Exception during user migration.
Please let me know what I am missing here. Thanks in advance.
def lambda_handler(event, context):
    print(event)
    event["response"] = {
        "userAttributes": {
            "email": event["userName"],
            "email_verified": "true",
        },
        "finalUserStatus": "CONFIRMED",
        "messageAction": "SUPPRESS",
        "desiredDeliveryMediums": "EMAIL",
        "forceAliasCreation": "false"
    }
    print(event)
    return event

I was having this problem, and I overcame it by increasing the memory allocated to the Lambda from the default 128 MB to 1024 MB. I am using CDK to deploy, so I did this in the Lambda creation:
const nodeUserMigration = new NodejsFunction(this, 'myLambdaName', {
  entry: path.join(__dirname, 'userMigration.ts'),
  runtime: Runtime.NODEJS_18_X,
  timeout: Duration.minutes(5),
  memorySize: 1024, // This is what I added to overcome the `UserNotFoundException: Exception migrating user in app client (redactedClientId)`
  environment: {
    // redacted environment variables
  },
});

Instead of
return event
You need
context.succeed(event)
It is probably possible to use return event directly; however, there would be other properties required for Cognito to recognize it (things such as isBase64Encoded), and I don't know what they might be. Nor does Amazon have any documentation on them.
Oh, and desiredDeliveryMediums should be an array of strings.
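For reference, here is a minimal sketch of the same migration response written as a Node.js handler, with desiredDeliveryMediums as an array of strings and the handler completed via context.succeed as suggested above. This is illustrative only; the attribute values are placeholders, not taken from the original post.

// Hypothetical Node.js "Migrate user" trigger, assuming the user has already
// been looked up (e.g. in RDS) and found valid. Field names follow the Cognito
// user migration trigger response shape; values are illustrative.
exports.handler = (event, context) => {
  event.response.userAttributes = {
    email: event.userName,       // assumes the username is the email address
    email_verified: 'true',
  };
  event.response.finalUserStatus = 'CONFIRMED';
  event.response.messageAction = 'SUPPRESS';
  event.response.desiredDeliveryMediums = ['EMAIL']; // an array, not a string
  event.response.forceAliasCreation = false;         // a boolean, not a string

  context.succeed(event);
};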


Update BigQuery scheduled query with notificationPubsubTopic fails

I am using the DataTransferServiceClient API/SDK for Node to create scheduled queries in BigQuery with a notificationPubsubTopic. Creating them works fine, no issues. Updating them results in an error:
INVALID_ARGUMENT: notificationPubsubTopic cannot be updated.
How I'm calling it:
const config = {
  transferConfig: {
    /* other config options */
    notificationPubsubTopic: "projects/engineering/topics/test"
  },
  updateMask: {
    paths: [
      "params.query",
      "params.write_disposition",
      "params.destination_table_name_template",
      "schedule",
      "notificationPubsubTopic"
    ],
  },
}

dataTransferClient.updateTransferConfig(config)
Some other info:
The topics I've tested with do exist. I can update the scheduled query in the UI to these other topics with no issue.
It fails even when re-using the already associated topic.
Updates without notificationPubsubTopic succeed. By this I specifically mean I am not passing the notificationPubsubTopic property and have removed it from the updateMask.
The paths inside updateMask needed to be converted to snake_case:
updateMask: {
  paths: [
    "params.query",
    "params.write_disposition",
    "params.destination_table_name_template",
    "schedule",
    "notification_pubsub_topic" // <--- here
  ],
},
The documentation even shows an example using camelCase:
https://cloud.google.com/bigquery-transfer/docs/reference/datatransfer/rest/v1/projects.locations.transferConfigs/patch#body.QUERY_PARAMETERS.update_mask
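Putting the fix together with the call from the question, the corrected update would look roughly like the sketch below. This is a sketch only: the transfer config name, schedule and topic are placeholders, and dataTransferClient is assumed to be an already-initialized BigQuery Data Transfer Service client.

// Sketch: update only the schedule and the notification topic of an
// existing scheduled query. Note the snake_case field name in updateMask.
const config = {
  transferConfig: {
    name: 'projects/engineering/locations/us/transferConfigs/12345', // placeholder
    schedule: 'every 24 hours',                                      // placeholder
    notificationPubsubTopic: 'projects/engineering/topics/test',
  },
  updateMask: {
    // snake_case here, even though the request body uses camelCase
    paths: ['schedule', 'notification_pubsub_topic'],
  },
};

const [updated] = await dataTransferClient.updateTransferConfig(config);
console.log('Updated:', updated.name);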

AWS Cognito Respond to New_Password_Required challenge returns "Cannot modify an already provided email"

An app that has been working successfully for a couple years has started throwing the following error whenever trying to respond to the NEW_PASSWORD_REQUIRED challenge with AWS Cognito:
{"__type":"NotAuthorizedException","message":"Cannot modify an already provided email"}
I'm sending the below, which all seems to match the docs.
{
  "ChallengeName": "NEW_PASSWORD_REQUIRED",
  "ClientId": <client_id>,
  "ChallengeResponses": {
    "userAttributes.email": "test@example.com",
    "NEW_PASSWORD": "testP#55w0rd",
    "USERNAME": "testfake"
  },
  "Session": <session_id>
}
Nothing has changed on the front end; is there a configuration change we might have done on the Cognito/AWS side that might cause this error?
I started getting the same error recently. I'm following Use case 23, "Authenticate a user and set new password for a user". After some investigation, I found that it is the email attribute in userAttributes that's causing completeNewPasswordChallenge to throw the error. The userAttributes I get from authenticateUser used to be an empty object {}, but now it looks like this:
{ email_verified: 'true', email: 'test@example.com' }
I had to delete the email attribute (as well as the email_verified attribute, as shown in the example code in Use case 23) before passing userAttributes to completeNewPasswordChallenge. So my code is now like this:
cognitoUser.authenticateUser(authenticationDetails, {
  ...
  newPasswordRequired: function(userAttributes, requiredAttributes) {
    // the api doesn't accept this field back
    delete userAttributes.email_verified;
    delete userAttributes.email; // <--- add this line

    // store userAttributes on global variable
    sessionUserAttributes = userAttributes;
  }
});

// ... handle new password flow on your app
handleNewPassword(newPassword) {
  cognitoUser.completeNewPasswordChallenge(newPassword, sessionUserAttributes);
}
I guess AWS changed their API recently, but I haven't found any documentation about this change. Even though the value of the email attribute is the same as the user's actual email, Cognito throws the Cannot modify an already provided email error if you include it in the request. Deleting it solves the issue.
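For the raw RespondToAuthChallenge request shown in the question, the same fix means omitting the userAttributes.email entry from ChallengeResponses. A hedged sketch using the AWS SDK for JavaScript (v2); the region, client id and session are placeholders from your own auth flow:

// Sketch only: clientId and session would come from the preceding
// InitiateAuth call that returned the NEW_PASSWORD_REQUIRED challenge.
const AWS = require('aws-sdk');
const cognito = new AWS.CognitoIdentityServiceProvider({ region: 'us-east-1' });

const params = {
  ChallengeName: 'NEW_PASSWORD_REQUIRED',
  ClientId: clientId,   // placeholder
  Session: session,     // placeholder
  ChallengeResponses: {
    USERNAME: 'testfake',
    NEW_PASSWORD: 'testP#55w0rd',
    // no "userAttributes.email" here -- including it now triggers
    // "Cannot modify an already provided email"
  },
};

cognito.respondToAuthChallenge(params, (err, data) => {
  if (err) console.error(err);
  else console.log(data.AuthenticationResult);
});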

Google Assistant - Account linking with Google Sign-In

I have an Express app which supports Google authentication and authorization via Passport. I have begun integrating it with Google Assistant, and things were going quite well, but I am having trouble with the account linking described at https://developers.google.com/actions/identity/google-sign-in#start_the_authentication_flow
Using the method in the docs at https://codelabs.developers.google.com/codelabs/actions-2/#4 I was able to get user details, but when I try to modify it to support
app.intent('Start Signin', conv => {
  conv.ask(new SignIn('To get your account details'))
})
and
app.intent('Get Signin', (conv, params, signin) => { ... })
Dialogflow always falls back to my Default Fallback Intent and I get an error in the Express console:
Error: Dialogflow IntentHandler not found for intent: Default Fallback Intent
My Dialogflow intent is set to use the webhook, and other intents work fine (until I add these sign-in intents!).
Reading the thread Dialogflow IntentHandler not found for intent: myIntent (Dialogflow V2), it was suggested that the intent name rather than the action name is used, so I checked my Actions on Google simulator and the request contains:
"inputs": [
{
"intent": "actions.intent.SIGN_IN",
"rawInputs": [
{
"inputType": "KEYBOARD"
}
],
"arguments": [
{
"name": "SIGN_IN",
"extension": {
"#type": "type.googleapis.com/google.actions.v2.SignInValue",
"status": "OK"
}
}
]
}
],
so I tried updating my Dialogflow intent name to actions.intent.SIGN_IN and modifying the intent name in my Express app accordingly, but it doesn't make any difference.
The simulator response includes:
"responseMetadata": {
"status": {
"code": 14,
"message": "Webhook error (206)"
},
but I'm not sure if that is just because for some reason the intent names are not matching up. Any help much appreciated!
As you speculate in the comments, the issue is that your "Get Signin" Intent isn't registered to get the event indicating that the user has signed in (or failed to). Since there is no such Intent setup, it ends up calling the Fallback Intent, which apparently doesn't have an Intent Handler registered in your webhook.
To make your "Get Signin" Intent get the sign-in event, set the "Event" field to actions_intent_SIGN_IN. (Note the similarity to the Intent name you saw in the simulator, but using underscores instead of dots.)
As an aside, the simulator was showing you what the communication between the Assistant and Dialogflow looks like, so it can be somewhat confusing to understand what Dialogflow is doing with it. It didn't have anything to do with the name of your Intent or anything else.
Finally, it often isn't necessary to do this check. You will know if the user is signed in because either the auth token has been set or the id token has been set (depending on your method of Account Linking).
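To illustrate the handler side, here is a minimal sketch of what the 'Get Signin' handler could look like once the Dialogflow intent has the actions_intent_SIGN_IN event attached. The replies are placeholders; it assumes the same actions-on-google Dialogflow app object used in the question.

// Sketch only: fires when Dialogflow matches the intent carrying the
// actions_intent_SIGN_IN event described above.
app.intent('Get Signin', (conv, params, signin) => {
  if (signin.status === 'OK') {
    // With Google Sign-In account linking, the decoded profile is available here.
    const payload = conv.user.profile.payload;
    conv.ask(`Thanks for signing in, ${payload.name}!`);
  } else {
    conv.ask(`You didn't sign in, but you can still ask me general questions.`);
  }
});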

How to upload file to AWS S3 using AWS AppSync

I am following this docs/tutorial in the AWS AppSync documentation.
It states:
With AWS AppSync you can model these as GraphQL types. If any of your mutations have a variable with bucket, key, region, mimeType and localUri fields, the SDK will upload the file to Amazon S3 for you.
However, I cannot get my file to upload to my S3 bucket. I understand that the tutorial is missing a lot of details. More specifically, the tutorial does not say that NewPostMutation.js needs to be changed.
I changed it the following way:
import gql from 'graphql-tag';

export default gql`
  mutation AddPostMutation($author: String!, $title: String!, $url: String!, $content: String!, $file: S3ObjectInput) {
    addPost(
      author: $author
      title: $title
      url: $url
      content: $content
      file: $file
    ) {
      __typename
      id
      author
      title
      url
      content
      version
    }
  }
`
Yet, even after I have implemented these changes, the file did not get uploaded...
There are a few moving parts under the hood that you need to have in place before this "just works" (TM). First of all, you need to make sure you have an appropriate input and type for an S3 object defined in your GraphQL schema:
enum Visibility {
  public
  private
}

input S3ObjectInput {
  bucket: String!
  region: String!
  localUri: String
  visibility: Visibility
  key: String
  mimeType: String
}

type S3Object {
  bucket: String!
  region: String!
  key: String!
}
The S3ObjectInput type, of course, is for use when uploading a new file - either by way of creating or updating a model within which said S3 object metadata is embedded. It can be handled in the request resolver of a mutation via the following:
{
  "version": "2017-02-28",
  "operation": "PutItem",
  "key": {
    "id": $util.dynamodb.toDynamoDBJson($ctx.args.input.id),
  },

  #set( $attribs = $util.dynamodb.toMapValues($ctx.args.input) )
  #set( $file = $ctx.args.input.file )
  #set( $attribs.file = $util.dynamodb.toS3Object($file.key, $file.bucket, $file.region, $file.version) )

  "attributeValues": $util.toJson($attribs)
}
This is making the assumption that the S3 file object is a child field of a model attached to a DynamoDB datasource. Note that the call to $util.dynamodb.toS3Object() sets up the complex S3 object file, which is a field of the model with a type of S3ObjectInput.

Setting up the request resolver in this way handles the upload of a file to S3 (when all the credentials are set up correctly - we'll touch on that in a moment), but it doesn't address how to get the S3Object back. This is where a field-level resolver attached to a local datasource becomes necessary. In essence, you need to create a local datasource in AppSync and connect it to the model's file field in the schema with the following request and response resolvers:
## Request Resolver ##
{
  "version": "2017-02-28",
  "payload": {}
}

## Response Resolver ##
$util.toJson($util.dynamodb.fromS3ObjectJson($context.source.file))
This resolver simply tells AppSync that we want to take the JSON string that is stored in DynamoDB for the file field of the model and parse it into an S3Object - this way, when you do a query of the model, instead of returning the string stored in the file field, you get an object containing the bucket, region, and key properties that you can use to build a URL to access the S3 Object (either directly via S3 or using a CDN - that's really dependent on your configuration).
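For example, once a query returns the parsed S3Object, the client can turn it into a URL. A hedged sketch, assuming direct access via the standard virtual-hosted-style S3 URL; post.file is a placeholder for the field returned by the resolver above:

// Sketch only: builds a plain S3 URL from the { bucket, region, key } object
// returned by the field-level resolver. Swap in your CDN domain if you use one.
function s3ObjectToUrl({ bucket, region, key }) {
  return `https://${bucket}.s3.${region}.amazonaws.com/${key}`;
}

// e.g. <img src={s3ObjectToUrl(post.file)} />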
Do make sure you have credentials set up for complex objects, however (told you I'd get back to this). I'll use a React example to illustrate this - when defining your AppSync parameters (endpoint, auth, etc.), there is an additional property called complexObjectsCredentials that needs to be defined to tell the client which AWS credentials to use for S3 uploads, e.g.:
const client = new AWSAppSyncClient({
  url: AppSync.graphqlEndpoint,
  region: AppSync.region,
  auth: {
    type: AUTH_TYPE.AWS_IAM,
    credentials: () => Auth.currentCredentials()
  },
  complexObjectsCredentials: () => Auth.currentCredentials(),
});
Assuming all of these things are in place, S3 uploads and downloads via AppSync should work.
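Tying this back to the question's AddPostMutation: the client-side call needs to pass a file variable carrying the bucket, key, region, mimeType and localUri fields so the SDK performs the upload. A sketch, assuming the client configured above; every value here is a placeholder:

// Sketch only: AddPostMutation is the gql document from the question.
client.mutate({
  mutation: AddPostMutation,
  variables: {
    author: 'someone',
    title: 'My post',
    url: 'https://example.com',
    content: 'Hello world',
    file: {
      bucket: 'my-uploads-bucket',            // placeholder bucket
      region: 'us-east-1',                    // placeholder region
      key: `uploads/${Date.now()}-photo.jpg`, // where the object will land
      mimeType: 'image/jpeg',
      localUri: localFile,                    // placeholder: a File object (web) or file path (React Native)
    },
  },
});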
Just to add to the discussion: for mobile clients, Amplify (or the AWS console, if you generate code there) will wrap the mutation arguments in an input object. The clients won't auto-upload if that wrapping exists, so you need to modify the mutation call directly in the AWS console so that file: S3ObjectInput is one of the top-level calling parameters. This was still the behaviour the last time I tested (Dec 2018) following the docs.
You would change to this calling structure:
type Mutation {
  createRoom(
    id: ID!,
    name: String!,
    file: S3ObjectInput,
    roomTourId: ID
  ): Room
}
Instead of autogenerated calls like:
type Mutation {
  createRoom(input: CreateRoomInput!): Room
}

input CreateRoomInput {
  id: ID
  name: String!
  file: S3ObjectInput
}
Once you make this change, both iOS and Android will happily upload your content if you do what @hatboyzero has outlined.
[Edit] I did a bit of research; supposedly this has been fixed in 2.7.3 (https://github.com/awslabs/aws-mobile-appsync-sdk-android/issues/11). They likely addressed iOS too, but I didn't check.

AWS Error: Proxy integrations cannot be configured to transform responses

I'm a beginner with Amazon's Lambda/API Gateway implementations.
I'm deploying a very simple API: a simple Lambda function on Python 2.7 that prints "Hello World", which I trigger with API Gateway. However, when I click on the Invoke URL link, it tells me {"message": "Internal server error"}.
So I'm trying to see what is wrong: when I click on the API itself, I can see the following greyed out in my Method Execution: "Integration Response: Proxy integrations cannot be configured to transform responses."
I have tested many different configurations but I still face the same error. I have no idea why this step is greyed out.
I had the same problem when trying to integrate API Gateway and a Lambda function. After spending a couple of hours, I figured it out.
When you create a new resource or method, the Use Lambda Proxy integration option is enabled by default.
You need to remove this: go to Integration Request and untick Use Lambda Proxy integration.
Then, in your Resources, under the Actions tab, choose Enable CORS.
Once this is done, deploy your API once again and test the function. This topic also explains what's happening under the hood.
Good luck...
The Lambda response should be in a specific format for API Gateway to process it. You can find the details in this post: https://aws.amazon.com/premiumsupport/knowledge-center/malformed-502-api-gateway/
exports.handler = (event, context, callback) => {
  var responseBody = {
    "key3": "value3",
    "key2": "value2",
    "key1": "value1"
  };
  var response = {
    "statusCode": 200,
    "headers": {
      "my_header": "my_value"
    },
    "body": JSON.stringify(responseBody),
    "isBase64Encoded": false
  };
  callback(null, response);
};
My API was working in Postman but not locally when I was developing the front end. I was getting the same errors when trying to enable CORS on my resources for GET, POST and OPTIONS, and after searching all over, @aditya's answer got me on the right track, but I had to tweak my code slightly.
I needed to add the res.statusCode and the two headers, and it started working.
// GET
// get all myModel
app.get('/models/', (req, res) => {
  const query = 'SELECT * FROM MyTable'
  pool.query(query, (err, results, fields) => {
    //...
    const models = [...results]
    const response = {
      data: models,
      message: 'All models successfully retrieved.',
    }
    //****** needed to add the next 3 lines
    res.statusCode = 200;
    res.setHeader('content-type', 'application/json');
    res.setHeader('Access-Control-Allow-Origin', '*');
    res.send(response)
  })
})
If you're using Terraform to provision your AWS resources, you can set the aws_api_gateway_integration type to "AWS" instead of "AWS_PROXY", and that should resolve your problem.