AppSync request mapping template errors not logged in CloudWatch - amazon-cloudwatch

Crosspost from: https://repost.aws/questions/QUp5jDZ6bsRkeXhIwHgQaWkg/app-sync-request-mapping-template-errors-not-logged-in-cloud-watch
I have a simple resolver that has a simple Lambda function as a data source. This function always throws an error (to test out logging).
The resolver has its request mapping template enabled, and it is configured as follows:
$util.error("request mapping error 1")
The API has logging configured to be as verbose as possible, yet I cannot see this "request mapping error 1" in my CloudWatch logs under the RequestMapping log type:
{
  "logType": "RequestMapping",
  "path": [
    "singlePost"
  ],
  "fieldName": "singlePost",
  "resolverArn": "xxx",
  "requestId": "bab942c6-9ae7-4771-ba45-7911afd262ac",
  "context": {
    "arguments": {
      "id": "123"
    },
    "stash": {},
    "outErrors": []
  },
  "fieldInError": false,
  "errors": [],
  "parentType": "Query",
  "graphQLAPIId": "xxx"
}
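For context, logging on the API was turned up along these lines (a minimal boto3 sketch of the setup, not my exact code; the API ID, name, and role ARN are placeholders):
import boto3

# Sketch: enable the most verbose field-level logging for the API.
# apiId, name, and the CloudWatch role ARN below are placeholders.
appsync = boto3.client("appsync")

appsync.update_graphql_api(
    apiId="xxx",       # the GraphQL API ID
    name="my-api",     # update_graphql_api also requires the API name
    logConfig={
        "fieldLogLevel": "ALL",          # most verbose resolver logging
        "cloudWatchLogsRoleArn": "arn:aws:iam::111111111111:role/appsync-logging",
        "excludeVerboseContent": False,  # keep mapping template content in logs
    },
)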
The error is not completely lost because I can see this error in the query response:
{
  "data": {
    "singlePost": null
  },
  "errors": [
    {
      "path": [
        "singlePost"
      ],
      "data": null,
      "errorType": null,
      "errorInfo": null,
      "locations": [
        {
          "line": 2,
          "column": 3,
          "sourceName": null
        }
      ],
      "message": "request mapping error 1"
    }
  ]
}
When I add $util.appendError("append request mapping error 1") to the request mapping template so it looks like this:
$util.appendError("append request mapping error 1")
$util.error("request mapping error 1")
Then the appended error appears in the RequestMapping log type but the errors array is still empty:
{
  "logType": "RequestMapping",
  "path": [
    "singlePost"
  ],
  "fieldName": "singlePost",
  "resolverArn": "xxx",
  "requestId": "f8eecff9-b211-44b7-8753-6cc6e269c938",
  "context": {
    "arguments": {
      "id": "123"
    },
    "stash": {},
    "outErrors": [
      {
        "message": "append request mapping error 1"
      }
    ]
  },
  "fieldInError": false,
  "errors": [],
  "parentType": "Query",
  "graphQLAPIId": "xxx"
}
When I do the same thing with the response mapping template, everything works as expected (the errors array contains the $util.error(message) message and the outErrors array contains the $util.appendError(message) message).
Is this working as expected, i.e. will $util.error(message) never show up in RequestMapping-type CloudWatch logs?
Under what conditions will the errors array in the RequestMapping log type be populated?
Bonus question: can the errors array contain more than one item for either the RequestMapping or ResponseMapping log types?

Related

Having issues reusing a stored variable in Graphql query in Karate framework

I have an issue with variable usage. I tried different options (storing the variable differently, declaring it, using text to define the query, storing the query as a variable).
I still get the error below:
"errors": [
{
"message": "invalid input syntax for type uuid: \"#(queueID)\"",
"locations": [
{
"line": 1,
"column": 11
}
],
"path": [
"deleteQueue"
],
"extensions": {
"code": "INTERNAL_SERVER_ERROR",
"exception": {
"name": "SequelizeDatabaseError",
"parent": {
"length": 109,
"name": "error",
"severity": "ERROR",
"code": "22P02",
"position": "34",
"file": "uuid.c",
"line": "137",
"routine": "string_to_uuid",
"sql": "DELETE FROM \"Queue\" WHERE \"id\" = '#(queueID)'"
These are my Gherkin steps:
Given request { query: 'mutation {createQueue(input: {name: "BDD-delete" }) {id} }'}
When method POST
Then status 200
And match response.data.createQueue.name == "BDD-delete"
* def queueID = response.data.createQueue.id
* print queueID
Given request { query: 'mutation {deleteQueue (id:"#(queueID)")} '}
And this is the output, when I print the queueID:
13:14:16.745 [main] INFO com.intuit.karate - [print] 758c0524-b18d-41f6-96aa-9db5eb8a7ac8
I also tried using a variable for the query:
Given text payload =
"""
mutation {
createQueue(input: {name: "BDD-delete" })
{id, name}
}
"""
And I tried the same for the deleteQueue.
It feels like the issue is related to string vs. uuid types: I must pass a uuid between the brackets in "#(queueID)".
First read this to get a sense of why this is happening: https://github.com/karatelabs/karate#rules-for-embedded-expressions
So try this:
Given request `{ query: 'mutation {deleteQueue (id:"${queueID}")} '}`
The good thing is that Karate supports JS-style placeholder replacement in strings within back-ticks.
Also refer: https://stackoverflow.com/a/69349118/143475

Facebook ads custom audience Data is missing or does not match schema error

I was building an integration with the Facebook Ads audience API, and according to the documentation the request must be created like this:
POST - https://graph.facebook.com/v15.0/<MY_CUSTOM_AUDIENCE_ID>/users?access_token=<MY_ACCESS_TOKEN>
{
  "session": {
    "session_id": 1,
    "batch_seq": 1,
    "last_batch_flag": true,
    "estimated_num_total": 1
  },
  "payload": {
    "schema": [
      "FN"
    ],
    "data": [
      "8b1ebea129cee0d2ca86be6706cd2dfcf79aaaea259fd0c311bdbf2a192be148"
    ]
  }
}
Using the previous example I received a 400 error:
{
  "error": {
    "message": "(#100) Data is missing or does not match schema",
    "type": "OAuthException",
    "code": 100,
    "fbtrace_id": "AqrLd9uIw0D4BBFtHF33bdU"
  }
}
To do this I used this documentation: https://developers.facebook.com/docs/marketing-api/audiences/guides/custom-audiences#hash
Has anyone used this before?
Your schema field type is an array, but the array form is used for multi-key qualification.
Change it to a string: schema: 'FN'
The docs list all the accepted formats.
This payload with multiple keys works for me:
{
  "session": {
    "session_id": 123,
    "batch_seq": 1,
    "last_batch_flag": true
  },
  "payload": {
    "schema": [
      "EMAIL",
      "PHONE",
      "FN"
    ],
    "data": [
      ["EMAIL_HASH", "PHONE_HASH", "FN_HASH"]
    ]
  }
}
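For completeness, here is a rough Python sketch of the single-key case (assumptions: the requests library; the audience ID, access token, and example first name are placeholders). Values are normalized and SHA-256 hashed per the hashing docs linked in the question:
import hashlib
import requests

def sha256_normalized(value: str) -> str:
    # Normalize (trim, lowercase) and hash, per the Custom Audiences docs.
    return hashlib.sha256(value.strip().lower().encode("utf-8")).hexdigest()

audience_id = "MY_CUSTOM_AUDIENCE_ID"   # placeholder
access_token = "MY_ACCESS_TOKEN"        # placeholder

body = {
    "session": {
        "session_id": 1,
        "batch_seq": 1,
        "last_batch_flag": True,
        "estimated_num_total": 1,
    },
    "payload": {
        "schema": "FN",                       # single key: a string, not an array
        "data": [sha256_normalized("John")],  # one bare hash per row
    },
}

resp = requests.post(
    f"https://graph.facebook.com/v15.0/{audience_id}/users",
    params={"access_token": access_token},
    json=body,
)
print(resp.status_code, resp.json())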

BigQuery: Routine deployment failing with error "Unknown option: description"

We use Terraform to deploy BigQuery objects (datasets, tables, routines etc.) to region europe-west2 in GCP. We do this many times a day, and all of a sudden at "2021-08-18T21:15:44.033910202Z" our deployments started failing when attempting to deploy BigQuery routines. They are all failing with errors of the form:
status: {
  code: 3
  message: "Unknown option: description"
}
Here is the first log message I can find pertaining to this error (I have redacted project names):
"protoPayload": {
"#type": "type.googleapis.com/google.cloud.audit.AuditLog",
"status": {
"code": 3,
"message": "Unknown option: description"
},
"authenticationInfo": {
"principalEmail": "deployer-dev#myadminproject.iam.gserviceaccount.com",
"serviceAccountDelegationInfo": [
{
"firstPartyPrincipal": {
"principalEmail": "deployer-dev#myadminproject.iam.gserviceaccount.com"
}
}
]
},
"requestMetadata": {
"callerIp": "10.51.0.116",
"callerSuppliedUserAgent": "Terraform/0.14.7 (+https://www.terraform.io) Terraform-Plugin-SDK/2.5.0 terraform-provider-google/3.69.0,gzip(gfe)",
"callerNetwork": "//compute.googleapis.com/projects/myadminproject/global/networks/__unknown__",
"requestAttributes": {},
"destinationAttributes": {}
},
"serviceName": "bigquery.googleapis.com",
"methodName": "google.cloud.bigquery.v2.RoutineService.InsertRoutine",
"authorizationInfo": [
{
"resource": "projects/myproject/datasets/p00003818_dp_model",
"permission": "bigquery.routines.create",
"granted": true,
"resourceAttributes": {}
}
],
"resourceName": "projects/myproject/datasets/p00003818_dp_model/routines/UserProfile_Events_AllCarData_Deployment",
"metadata": {
"routineCreation": {
"routine": {
"routineName": "projects/myproject/datasets/p00003818_dp_model/routines/UserProfile_Events_AllCarData_Deployment"
},
"reason": "ROUTINE_INSERT_REQUEST"
},
"#type": "type.googleapis.com/google.cloud.audit.BigQueryAuditMetadata"
}
},
"insertId": "ak27xdbke",
"resource": {
"type": "bigquery_dataset",
"labels": {
"dataset_id": "p00003818_dp_model",
"project_id": "myproject"
}
},
"timestamp": "2021-08-18T21:15:43.109609Z",
"severity": "ERROR",
"logName": "projects/myproject/logs/cloudaudit.googleapis.com%2Factivity",
"receiveTimestamp": "2021-08-18T21:15:44.033910202Z"
}
The fact that this occurred without any changes on our side indicates that this is a problem at the Google end. I also observe that, whilst we witnessed this in a few projects, it occurred first in one project and then a few minutes later in another; that may or may not be helpful information.
Posting here in case anyone else hits this problem, and also hoping it might catch the attention of a Googler.
UPDATE! I have reproduced the problem using the REST API https://cloud.google.com/bigquery/docs/reference/rest/v2/routines/insert
When I enter a payload that does not include a description, the routine is created successfully. However, if I include a description, which the API reference shows is a valid parameter, then the request fails.
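The repro boils down to something like this (a Python sketch against the routines.insert endpoint; assumptions: a token from `gcloud auth print-access-token`, and placeholder project, dataset, and routine IDs mirroring the redacted ones above):
import requests

token = "ACCESS_TOKEN"  # placeholder, e.g. from `gcloud auth print-access-token`
url = ("https://bigquery.googleapis.com/bigquery/v2/"
       "projects/myproject/datasets/p00003818_dp_model/routines")

routine = {
    "routineReference": {
        "projectId": "myproject",
        "datasetId": "p00003818_dp_model",
        "routineId": "repro_routine",
    },
    "routineType": "PROCEDURE",
    "language": "SQL",
    "definitionBody": "SELECT 1;",
    # Removing this field makes the insert succeed; including it triggers
    # "Unknown option: description".
    "description": "a test routine",
}

resp = requests.post(url, json=routine,
                     headers={"Authorization": f"Bearer {token}"})
print(resp.status_code, resp.json())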

MFP 8 - error parsing JSON object when using MobileFirst Push API

I am getting the following error when I try to use the push API to send a notification. The JSON object works in version 7.1:
{
  "code": "FPWSE0011E",
  "message": "Bad Request - The JSON validation failed at 'target'.",
  "productVersion": "8.0.0.00-20161122-1902"
}
Here is my JSON object
{
  "message": {
    "alert": "hello"
  },
  "settings": {
    "apns": {
      "badge": 1,
      "iosActionKey": "Ok",
      "payload": {
        "messageType": "HELLO",
        "detail": "Here's your message details."
      },
      "sound": "song.mp3"
    },
    "gcm": {
      "payload": {},
      "sound": "song.mp3"
    }
  },
  "target": {
    "consumerIds": [],
    "deviceIds": ["4A1086CF-873A-4404-BE2D-200EA6BDA8AD"],
    "platforms": [
      "A", "G"
    ]
  }
}
I am using the admin REST API interface:
https://myserver/mfpadmin/management-apis/2.0/runtimes/mfp/notifications/applications/com.myjobs/messages
I followed the format from the documentation
http://www.ibm.com/support/knowledgecenter/SSHS8R_8.0.0/com.ibm.worklight.apiref.doc/apiref/r_restapi_send_message_post.html
Thanks for your help
According to the v8.0 documentation, only one property is allowed in target. In your JSON, several properties are defined.
See example JSON here: https://mobilefirstplatform.ibmcloud.com/tutorials/en/foundation/8.0/notifications/sending-notifications/#sending-notifications
And as can be seen:
target" : {
// The list below is for demonstration purposes only - per the documentation only 1 target is allowed to be used at a time.
"deviceIds" : [ "MyDeviceId1", ... ],
"platforms" : [ "A,G", ... ],
"tagNames" : [ "Gold", ... ],
"userIds" : [ "MyUserId", ... ],
},
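So the request should send exactly one of those properties, for example (a Python sketch; assumptions: the requests library and placeholder admin credentials; the URL and device ID come from the question):
import requests

message = {
    "message": {"alert": "hello"},
    "target": {
        # Exactly one target property, per the v8.0 docs.
        "deviceIds": ["4A1086CF-873A-4404-BE2D-200EA6BDA8AD"]
    },
}

resp = requests.post(
    "https://myserver/mfpadmin/management-apis/2.0/runtimes/mfp/"
    "notifications/applications/com.myjobs/messages",
    json=message,
    auth=("admin", "password"),  # placeholder credentials
)
print(resp.status_code, resp.text)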

Invalid Path error while inserting job from google cloud storage to google bigquery

I am trying to insert a job through an HTTP POST request, but I am getting an Invalid Path error.
My request body is as follows:
{
  "configuration": {
    "load": {
      "sourceUris": [
        "gs://onianalytics/PersData.csv"
      ],
      "schema": {
        "fields": [
          {
            "name": "Name",
            "type": "STRING"
          },
          {
            "name": "Age",
            "type": "INTEGER"
          }
        ]
      },
      "destinationTable": {
        "datasetId": "Test_Dataset",
        "projectId": "lithe-anvil-404",
        "tableId": "tb_test_Pers"
      }
    }
  },
  "jobReference": {
    "jobId": "10",
    "projectId": "lithe-anvil-404"
  }
}
For the sourceUris parameter I am passing "gs://onianalytics/PersData.csv", where onianalytics is my bucket name and PersData.csv is my CSV file (from which I want to upload data into Google BigQuery).
I am getting the below response:
"status": {
"state": "DONE",
"errorResult": {
"reason": "invalid",
"message": "Invalid path: gs://onianalytics/PersData.csv"
},
"errors": [
{
"reason": "invalid",
"message": "Invalid path: gs://onianalytics/PersData.csv"
}
]
},
"statistics": {
"creationTime": "1387276603674",
"startTime": "1387276603751",
"endTime": "1387276603751"
}
}
My bucket is under the same project ID, which has the BigQuery service activated. Also, I have Google Cloud Storage enabled under APIs and Auth. The following scopes are added while authenticating:
googleapis.com/auth/bigquery, googleapis.com/auth/cloud-platform, googleapis.com/auth/devstorage.full_control, googleapis.com/auth/devstorage.read_only, googleapis.com/auth/devstorage.read_write
I am inserting this job via the "Try it!" link which is available on developers.google.com/bigquery/docs/reference/v2/jobs/insert.
In fact, I am able to create buckets and objects in Google Cloud Storage through the APIs. But when I try to insert a job from the uploaded object (which is a CSV file), I get an "Invalid Path" error. Can anyone please help me identify why this error is occurring?
The error I get when trying the code above is "Not found: URI gs://onianalytics/PersData.csv".
I'm wondering if, instead of /onianalytics/, you had a different path with invalid characters?
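One quick way to rule that out is to list the bucket and confirm the exact object name (a sketch using the google-cloud-storage client library; assumes application default credentials are configured):
from google.cloud import storage

client = storage.Client(project="lithe-anvil-404")
bucket = client.bucket("onianalytics")

# List everything in the bucket to verify the exact object name and casing.
for blob in bucket.list_blobs():
    print(blob.name)

# Or check the specific object directly.
print(bucket.blob("PersData.csv").exists())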