In Amazon's docs, the example trigger handler is given in JavaScript. That example ends with a call to context.done, but the Python context object passed to a Lambda handler doesn't have a done function.
How can one write trigger handlers for AWS Cognito User Pools in Python?
After some experimenting, the answer turns out to be straightforward: set the response fields and return the event.
def handler(event, context):
    # Instead of calling context.done as in the JavaScript example,
    # modify the response portion of the event and return it.
    event['response']['autoConfirmUser'] = False
    event['response']['autoVerifyEmail'] = False
    event['response']['autoVerifyPhone'] = False
    return event
I tried my best but could not find information on calling an API inside a JavaScript function when automating with Karate. I might get suggestions to call the API outside the function and then do operations inside the function; however, my use case requires calling the API inside the function itself. Is there a way to do this?
One approach is to create a Java file and write the code in Java. However, I specifically want to know whether there is any way to call an API inside a JS function in a feature file itself.
First, these kinds of "clever" tests are not recommended; please read this to understand why: https://stackoverflow.com/a/54126724/143475
If you still want to do this, read on.
Most of the time, this kind of need can be achieved by calling a second feature file:
* if (condition) karate.call('first.feature')
Finally, this is an experimental and undocumented feature in Karate, but there is a JS API to perform HTTP requests:
* eval
"""
var http = karate.http('https://httpbin.org');
http.path('anything');
var response = http.get().body;
karate.log('response:', response);
"""
It is a "fluent API", so you can do everything in one line:
var body = karate.http('https://httpbin.org/get').get().body;
If you need details, read the source code of the HttpRequestBuilder and Response classes in the Karate project.
I have an AWS setup that requires me to assume a role and get the corresponding credentials in order to write to S3. For example, to write with the AWS CLI, I need to use the --profile readwrite flag. If I wrote the code myself with boto3, I'd assume the role via STS, get credentials, and create a new session.
However, there is a bunch of applications and packages relying on boto3's default configuration; e.g. the internal code runs like this:
s3 = boto3.resource('s3')
result_s3 = s3.Object(bucket, s3_object_key)
result_s3.put(
    Body=value.encode(content_encoding),
    ContentEncoding=content_encoding,
    ContentType=content_type,
)
From the documentation, boto3 can be told to use a default profile via (among others) the AWS_PROFILE environment variable, and it clearly "works" in the sense that boto3.Session().profile_name does match the variable, but the applications still won't write to S3.
What would be the cleanest/correct way to set this up properly? I tried to pull credentials from STS and export them as AWS_SECRET_TOKEN etc., but that didn't work for me...
Have a look at the answer here:
How to choose an AWS profile when using boto3 to connect to CloudFront
You can get boto3 to use the other profile like so:
rw = boto3.session.Session(profile_name='readwrite')
s3 = rw.resource('s3')
I think the correct answer to my question is the one shared by Nathan Williams in the comments.
In my specific case, given that I had to initiate the code from Python and was a bit worried about AWS settings spilling into other operations, I used the fact that boto3 has a DEFAULT_SESSION singleton, used on every call, and simply overwrote it with a session that assumed the proper role:
hook = S3Hook(aws_conn_id=aws_conn_id)
boto3.DEFAULT_SESSION = hook.get_session()
(Here, S3Hook is Airflow's S3 handling object.) After that, everything in the same runtime worked perfectly.
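For anyone without Airflow in the picture, a minimal sketch of the same DEFAULT_SESSION trick using plain boto3 and STS might look like this (the role ARN and session name below are placeholders, not values from the original setup):
import boto3

# Sketch only: assume the role explicitly via STS, then overwrite boto3's
# DEFAULT_SESSION so that code calling boto3.resource('s3') directly picks it up.
sts = boto3.client('sts')
creds = sts.assume_role(
    RoleArn='arn:aws:iam::123456789012:role/readwrite',  # placeholder role ARN
    RoleSessionName='readwrite-session',                 # placeholder session name
)['Credentials']

boto3.DEFAULT_SESSION = boto3.session.Session(
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'],
)

# From here on (in the same runtime), boto3.resource('s3') uses the assumed role.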
According to the documentation, ClientMetadata will not be provided inside the custom message trigger.
Are there alternatives/workarounds to pass ClientMetadata to the custom message trigger using purely Cognito hooks/triggers?
I was able to achieve this by using a Cognito hook (pre-authentication) that is triggered before the custom message hook.
My use case is to send a custom message to a user depending on the content of the ClientMetadata they provide.
Because the pre-authentication hook executes before the custom message hook, I could save the ClientMetadata somewhere on the server in the pre-auth hook, then fetch it when the next hook (custom message) fires. Not very fancy, but it was done with pure Cognito hooks :)
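A rough sketch of that flow (not the poster's actual code: the DynamoDB table and the greeting field are made up, and the event field names follow my reading of the Cognito trigger docs):
import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('client-metadata-cache')  # hypothetical storage table

def pre_authentication_handler(event, context):
    # ClientMetadata is available in the pre-authentication trigger, so stash it
    # server-side, keyed by user name, for the custom message trigger to read.
    metadata = event['request'].get('clientMetadata', {})
    table.put_item(Item={'username': event['userName'], 'metadata': metadata})
    return event

def custom_message_handler(event, context):
    # ClientMetadata is not provided here, so fetch what pre-authentication saved.
    item = table.get_item(Key={'username': event['userName']}).get('Item', {})
    metadata = item.get('metadata', {})
    if metadata.get('greeting'):  # made-up metadata field
        event['response']['emailMessage'] = metadata['greeting'] + ' Your code is {####}'
    return event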
I would like to add a trigger to my Lambda function using serverless.yml instead of configuring it manually.
I'm trying to fetch the QueueArn with Fn::GetAtt, because pasting the entire ARN string is not good practice.
Here is what I'm trying:
resources:
  Resources:
    WaitingSQS:
      Type: AWS::SQS::Queue
      Properties:
        QueueName: ${self:provider.environment.MY_QUEUE_SQS}

functions:
  consumerCallbackQueue:
    handler: src/consumer_callback_queue.handler
    description: Consume SQS callback queue
    events:
      - sqs: { "Fn::GetAtt": ["WaitingSQS", "Arn"] }
This is not working: the Lambda function is deployed without any errors, but the trigger is not added. If I replace the sqs attribute value with the plain string 'arn:aws:sqs:us-east-1:XXXXXXXX:waiting-dev', it works like a charm.
How can I change my code to make it work?
I know it is probably late, but I had a similar issue. Check in the console whether the SQS trigger is enabled; in some cases it can get automatically disabled, which is what happened to me.
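If you would rather check and fix this from code than from the console, a sketch with boto3 (the function name is a placeholder) could look like this:
import boto3

lambda_client = boto3.client('lambda')

# The SQS trigger shows up as an event source mapping on the function.
mappings = lambda_client.list_event_source_mappings(
    FunctionName='consumerCallbackQueue'  # placeholder: use the deployed function name
)['EventSourceMappings']

for mapping in mappings:
    print(mapping['EventSourceArn'], mapping['State'])
    if mapping['State'] == 'Disabled':
        # Re-enable a mapping that got automatically disabled.
        lambda_client.update_event_source_mapping(UUID=mapping['UUID'], Enabled=True)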
I want to make a simple HTTP REST call to a Google machine learning predict endpoint, but I can't find any information on how to do that. As far as I can tell from the limited documentation, you have to use either the Java or Python library (or figure out how to properly encrypt everything when using the REST auth endpoints) and get a credentials object. Then the instructions end, and I have no idea how to actually use my credentials object. This is my code so far:
import urllib2
from google.oauth2 import service_account
# Constants
ENDPOINT_URL = 'ml.googleapis.com/v1/projects/{project}/models/{model}:predict?access_token='
SCOPES = ['https://www.googleapis.com/auth/cloud-platform']
SERVICE_ACCOUNT_FILE = 'service.json'
credentials = service_account.Credentials.from_service_account_file(SERVICE_ACCOUNT_FILE, scopes=SCOPES)
access_token=credentials.token
opener = urllib2.build_opener(urllib2.HTTPHandler)
request = urllib2.Request(ENDPOINT_URL + access_token)
request.get_method = lambda: 'POST'
result = opener.open(request).read()
print(str(result))
If I print credentials.valid it returns False, so I think there is an issue with initializing the credentials object, but I don't know what it is, since no errors are reported, the fields inside the credentials object are all correct, and I did everything according to the instructions. Also, my service.json is the same one our mobile team is successfully using to get an access token, so I know the JSON file has the correct data.
How do I get an access token for the machine learning service that I can use to call the predict endpoint?
It turns out the best way to do a simple query is to use the gcloud command-line tool. I ended up following the instructions here to set up my environment: https://cloud.google.com/sdk/docs/quickstart-debian-ubuntu
Then the instructions here to actually hit the endpoint (with some help from the person who originally set up the model):
https://cloud.google.com/sdk/gcloud/reference/ml-engine/predict
It was way easier than trying to use the Python library, and I highly recommend it to anyone who just wants to hit the predict endpoint.
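That said, if you do want to hit the REST endpoint from Python after all, I believe the missing step in the code above is refreshing the credentials so that a token actually gets issued, and then sending it as a Bearer header rather than a query parameter. A rough Python 3 sketch along those lines (project, model, and payload are placeholders):
import json
import urllib.request

from google.auth.transport.requests import Request
from google.oauth2 import service_account

SCOPES = ['https://www.googleapis.com/auth/cloud-platform']
credentials = service_account.Credentials.from_service_account_file('service.json', scopes=SCOPES)
credentials.refresh(Request())  # until refreshed, credentials.valid is False and .token is None

url = 'https://ml.googleapis.com/v1/projects/my-project/models/my-model:predict'  # placeholder project/model
payload = json.dumps({'instances': [{'input': [0.0, 1.0]}]}).encode('utf-8')  # placeholder request body
request = urllib.request.Request(
    url,
    data=payload,
    headers={
        'Authorization': 'Bearer ' + credentials.token,
        'Content-Type': 'application/json',
    },
)
print(urllib.request.urlopen(request).read().decode('utf-8'))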