I have a Chalice app that reads config data from a file in an S3 bucket. The file can change from time to time, and I want the app to immediately use the updated values, so I am using the on_s3_event decorator to reload the config file.
My code looks something like this (stripped way down for clarity):
CONFIG = {}
app = Chalice(app_name='foo')

@app.on_s3_event(bucket=S3_BUCKET, events=['s3:ObjectCreated:*'],
                 prefix='foo/')
def event_handler(event):
    _load_config()

def _load_config():
    # fetch json file from S3 bucket
    CONFIG['foo'] = some item from the json file...
    CONFIG['bar'] = some other item from the json file...

_load_config()

@app.route('/')
def home():
    # refer to CONFIG values here
My problem is that for a short while (maybe 5-10 minutes) after uploading a new version of the config file, the app still uses the old config values.
Am I doing this wrong? Should I not be depending on global state in a Lambda function at all?
So your design here is flawed.
When you create an S3 event in Chalice, it creates a separate Lambda function for that event. The CONFIG variable gets updated in the running instance of that event-handler Lambda and in any new instances of it, but every other Lambda function in your Chalice app that is already running just carries on with its current settings until it is cleaned up and restarted.
If you cannot live with a config that only changes when you deploy your Lambda functions, you could use Redis or some other shared in-memory cache/DB.
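As a rough illustration, here is a minimal sketch of that idea using a Redis instance (for example ElastiCache) with the redis-py client; the cache endpoint and key name are placeholders, not part of your setup:

import json
import redis

# Shared cache that every Lambda instance talks to, instead of a module-level dict.
r = redis.Redis(host='my-config-cache.example.com', port=6379,
                decode_responses=True)

def save_config(config_dict):
    # Called from the S3 event handler after parsing the uploaded file.
    r.set('app_config', json.dumps(config_dict))

def get_config():
    # Called at the start of each request, so updates are picked up immediately.
    return json.loads(r.get('app_config') or '{}')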
You should be using the .chalice/config.json file to store the variables for your Chalice application. The values you define there are exposed to your functions as environment variables and can be read with os.environ:
import os

URL = os.environ['MYVAR']
Your config.json file might look like this:
{
  "version": "2.0",
  "app_name": "MyApp",
  "manage_iam_role": false,
  "iam_role_arn": "arn:aws:iam::************:role/Chalice",
  "lambda_timeout": 300,
  "stages": {
    "development": {
      "environment_variables": {
        "MYVAR": "foo"
      }
    },
    "production": {
      "environment_variables": {
        "MYVAR": "bar"
      }
    }
  },
  "lambda_memory_size": 2048
}
I'm creating a scheduled ECS task in Terraform. When I try to override the container definition's entryPoint, the resulting task does not use the overridden entryPoint. However, if I try to override the command, it works fine (it adds a new command in addition to the existing entry point). I cannot find anything in the docs that leads me to believe there is no support for overriding entryPoint, but that may be the case?
Below is the code for the CloudWatch event target in Terraform:
resource "aws_cloudwatch_event_target" "ecs_task" {
target_id = "run-${var.task_name}-scheduled"
arn = "${var.cluster_arn}"
rule = "${aws_cloudwatch_event_rule.ecs_task_event_rule.name}"
role_arn = "${aws_iam_role.ecs_event.arn}"
ecs_target = {
launch_type = "${var.launch_type}"
network_configuration = {
subnets = ["${var.subnet_ids}"]
security_groups = ["${var.security_group_ids}"]
}
task_count = 1
task_definition_arn = "${var.task_arn}"
}
input = <<DOC
{
"containerOverrides": [
{
"name": "${var.task_name}",
"entryPoint": ${jsonencode(var.command_overrides)}
}
]
}
DOC
}
This creates a new scheduled task on the AWS console, where the input field is the following:
{
  "containerOverrides": [
    {
      "name": "my-container-name",
      "entryPoint": [
        "sh",
        "/my_script.sh"
      ]
    }
  ]
}
However, tasks launched by this rule do not have the entry point override and instead use the entryPoint defined in the original task definition.
TLDR: How can I override the entrypoint for a scheduled task?
As of today, only a limited set of fields can be overridden, because a scheduled task ultimately uses the RunTask API. The overridable fields are the following:
command
environment
taskRoleArn
cpu
memory
memoryReservation
resourceRequirements
Overrides for other container definition fields, such as entryPoint, portMappings, and logConfiguration, are not supported.
The solution is to use command instead of entryPoint in the original task definition, since command can be overridden but entryPoint cannot.
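For example, a sketch based on the snippet in the question, reusing its variables: move the entry point into command in the task definition, then pass the override through command in the event target's input:

  input = <<DOC
{
  "containerOverrides": [
    {
      "name": "${var.task_name}",
      "command": ${jsonencode(var.command_overrides)}
    }
  ]
}
DOC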
I have created a user pool and am trying to migrate users from RDS using the user migration trigger, which invokes a Lambda function that returns the updated event object, but it is not working for me.
I followed the previously suggested solution and removed the two fields below, but it is still not working:
"desiredDeliveryMediums": "EMAIL",
"forceAliasCreation": "false"
Here is the response object that I am sending from the Lambda; I am still facing the same issue, "Exception during user migration".
Please let me know what I am missing here. Thanks in advance.
def lambda_handler(event, context):
    print event
    event["response"] = {
        "userAttributes": {
            "email": event["userName"],
            "email_verified": "true",
        },
        "finalUserStatus": "CONFIRMED",
        "messageAction": "SUPPRESS",
        "desiredDeliveryMediums": "EMAIL",
        "forceAliasCreation": "false"
    }
    print event
    return event
I was having this problem, and I overcame it by increasing the memory allocated to the Lambda from the default 128 MB to 1024 MB. I am using CDK to deploy, so I did this in the Lambda creation:
const nodeUserMigration = new NodejsFunction(this, 'myLambdaName', {
  entry: path.join(
    __dirname,
    'userMigration.ts'
  ),
  runtime: Runtime.NODEJS_18_X,
  timeout: Duration.minutes(5),
  memorySize: 1024, // This is what I added to overcome the `UserNotFoundException: Exception migrating user in app client (redactedClientId)`
  environment: {
    // redacted environment variables
  },
});
Instead of
return event
You need
context.succeed(event)
It is probably possible to use return event directly; however, there would be other properties required to get Cognito to recognize it (things such as isBase64Encoded) and I don't know what they might be. Neither does Amazon have any documentation on them.
Oh, and desiredDeliveryMediums should be an array of strings.
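Putting those two points together, here is a rough sketch of the handler for the Node.js runtime (where context.succeed is available); the attribute values simply mirror the ones from the question:

exports.handler = (event, context) => {
  event.response = {
    userAttributes: {
      email: event.userName,
      email_verified: 'true',
    },
    finalUserStatus: 'CONFIRMED',
    messageAction: 'SUPPRESS',
    desiredDeliveryMediums: ['EMAIL'], // an array of strings, not a single string
    forceAliasCreation: false,
  };
  context.succeed(event); // hand the event back via the context instead of returning it
};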
I am trying to upload images to my S3 bucket when sending chat messages to my Aurora database using AppSync with Lambda configured as its data source.
My resolver for the mutation is:
{
  "version": "2017-02-28",
  "operation": "Invoke",
  "payload": {
    "field": "createMessage",
    "arguments": $utils.toJson($context.arguments)
  }
}
The messages are being saved correctly in the database; however, the S3 image files are not being saved in my S3 bucket. I believe I have configured everything correctly except for the resolver, which I am not sure about.
Uploading files with AppSync when the data source is Lambda is basically the same as for every other data source, and it does not depend on resolvers.
Just make sure you have your credentials for complex objects set up (a JS example using the Amplify library for authorization):
import AWSAppSyncClient from 'aws-appsync'
import { Auth } from 'aws-amplify'

const client = new AWSAppSyncClient({
  url: /* your endpoint */,
  region: /* your region */,
  complexObjectsCredentials: () => Auth.currentCredentials(),
})
You also need to provide the S3 complex object as an input type for your mutation:
input S3ObjectInput {
  bucket: String!
  key: String!
  region: String!
  localUri: String
  mimeType: String
}
Everything else will work just fine even with a Lambda data source. You can find more information related to your question here (that example uses DynamoDB, but it is basically the same for Lambda): https://stackoverflow.com/a/50218870/9359164
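For illustration, a minimal sketch of what the client-side mutation call could look like with the complex object attached, using the client created above; the mutation name, fields, and file details are placeholders rather than anything from the question:

import gql from 'graphql-tag'

// Hypothetical createMessage mutation that accepts a file: S3ObjectInput argument
const CREATE_MESSAGE = gql`
  mutation CreateMessage($content: String!, $file: S3ObjectInput) {
    createMessage(content: $content, file: $file) {
      id
      content
    }
  }
`

client.mutate({
  mutation: CREATE_MESSAGE,
  variables: {
    content: 'hello',
    file: {
      bucket: 'my-chat-images',   // placeholder bucket
      key: 'uploads/photo.png',   // placeholder object key
      region: 'us-east-1',
      localUri: selectedFile,     // the File/Blob (or local path) to upload
      mimeType: 'image/png',
    },
  },
})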
I am following this tutorial in the AWS AppSync docs.
It states:
With AWS AppSync you can model these as GraphQL types. If any of your mutations have a variable with bucket, key, region, mimeType and localUri fields, the SDK will upload the file to Amazon S3 for you.
However, I cannot get my file to upload to my S3 bucket. I understand that the tutorial is missing a lot of details; more specifically, it does not say that NewPostMutation.js needs to be changed.
I changed it in the following way:
import gql from 'graphql-tag';

export default gql`
  mutation AddPostMutation($author: String!, $title: String!, $url: String!, $content: String!, $file: S3ObjectInput) {
    addPost(
      author: $author
      title: $title
      url: $url
      content: $content
      file: $file
    ) {
      __typename
      id
      author
      title
      url
      content
      version
    }
  }
`
Yet even after I implemented these changes, the file did not get uploaded...
There are a few moving parts under the hood that you need to have in place before this "just works" (TM). First of all, you need to make sure you have an appropriate input and type for an S3 object defined in your GraphQL schema:
enum Visibility {
  public
  private
}

input S3ObjectInput {
  bucket: String!
  region: String!
  localUri: String
  visibility: Visibility
  key: String
  mimeType: String
}

type S3Object {
  bucket: String!
  region: String!
  key: String!
}
The S3ObjectInput type, of course, is for use when uploading a new file - either by way of creating or updating a model within which said S3 object metadata is embedded. It can be handled in the request resolver of a mutation via the following:
{
  "version": "2017-02-28",
  "operation": "PutItem",
  "key": {
    "id": $util.dynamodb.toDynamoDBJson($ctx.args.input.id)
  },
  #set( $attribs = $util.dynamodb.toMapValues($ctx.args.input) )
  #set( $file = $ctx.args.input.file )
  #set( $attribs.file = $util.dynamodb.toS3Object($file.key, $file.bucket, $file.region, $file.version) )
  "attributeValues": $util.toJson($attribs)
}
This makes the assumption that the S3 file object is a child field of a model attached to a DynamoDB data source. Note that the call to $util.dynamodb.toS3Object() sets up the complex S3 object file, which is a field of the model with a type of S3ObjectInput. Setting up the request resolver in this way handles the upload of a file to S3 (when all the credentials are set up correctly - we'll touch on that in a moment), but it doesn't address how to get the S3Object back.
This is where a field-level resolver attached to a local data source becomes necessary. In essence, you need to create a local data source in AppSync and connect it to the model's file field in the schema with the following request and response resolvers:
## Request Resolver ##
{
  "version": "2017-02-28",
  "payload": {}
}
## Response Resolver ##
$util.toJson($util.dynamodb.fromS3ObjectJson($context.source.file))
This resolver simply tells AppSync that we want to take the JSON string stored in DynamoDB for the file field of the model and parse it into an S3Object. This way, when you query the model, instead of the string stored in the file field you get back an object containing the bucket, region, and key properties, which you can use to build a URL to access the S3 object (either directly via S3 or using a CDN - that really depends on your configuration).
Do make sure you have credentials set up for complex objects, however (told you I'd get back to this). I'll use a React example to illustrate this: when defining your AppSync parameters (endpoint, auth, etc.), there is an additional property called complexObjectsCredentials that needs to be defined to tell the client which AWS credentials to use for S3 uploads, e.g.:
const client = new AWSAppSyncClient({
  url: AppSync.graphqlEndpoint,
  region: AppSync.region,
  auth: {
    type: AUTH_TYPE.AWS_IAM,
    credentials: () => Auth.currentCredentials()
  },
  complexObjectsCredentials: () => Auth.currentCredentials(),
});
Assuming all of these things are in place, S3 uploads and downloads via AppSync should work.
Just to add to the discussion: for mobile clients, Amplify (or the AWS console, if you generate the schema there) wraps the mutation arguments in an input object. The clients won't auto-upload the file if that wrapping exists, so you can modify the mutation directly in the AWS console so that file: S3ObjectInput appears in the calling parameters. This was still the behavior the last time I tested (Dec 2018) while following the docs.
You would change to this calling structure:
type Mutation {
  createRoom(
    id: ID!,
    name: String!,
    file: S3ObjectInput,
    roomTourId: ID
  ): Room
}
Instead of autogenerated calls like:
type Mutation {
  createRoom(input: CreateRoomInput!): Room
}

input CreateRoomInput {
  id: ID
  name: String!
  file: S3ObjectInput
}
Once you make this change, both iOS and Android will happily upload your content if you do what @hatboyzero has outlined.
[Edit] I did a bit of research; supposedly this was fixed in 2.7.3 (https://github.com/awslabs/aws-mobile-appsync-sdk-android/issues/11). They likely addressed iOS as well, but I didn't check.
I keep any hard-coded information inside models/config.js, but I'm not sure that models/config.js is the correct place for a port.
Keep a ./config/my_database_config.js and put all of the database settings there.
Do the same with ./config/main_server_config.js; usually all other config files can go in ./config/ as well.
You can hard-code values in this my_database_config.js file, or the file could request the config from a server that returns JSON like the following.
The config could be a JSON object of this shape:
configJson = {
  "env_production": {
    "db_host_production": "www.host.production.url",
    "db_password_production": "www.host.production.password"
  },
  "env_staging": {
    "db_host_staging": "www.host.staging.url",
    "db_password_staging": "www.host.staging.password"
  },
  "env_local": {
    "db_host_local": "www.host.local.url",
    "db_password_local": "www.host.local.password"
  }
}
If it is just for local testing purposes, you could even pass config values in as environment variables and read them into the JSON in config.js.
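For example, a minimal sketch of such a config module; the file layout, key names, and the NODE_ENV convention are assumptions rather than anything from the question:

// ./config/my_database_config.js
const configJson = {
  env_production: {
    db_host: 'www.host.production.url',
    db_password: process.env.DB_PASSWORD || 'www.host.production.password',
  },
  env_local: {
    db_host: 'www.host.local.url',
    db_password: process.env.DB_PASSWORD || 'www.host.local.password',
  },
};

// Pick the block for the current environment (defaults to local).
const env = process.env.NODE_ENV || 'local';
module.exports = configJson[`env_${env}`];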