How does LoopBack (lb4) create the queries shown in the browser? - react-admin

When I run my React app I get the error message:
Failed to load resource: the server responded with a status of 500 (Internal Server Error)
and the browser points at line 158,
let response = await fetch(url.toString(), options);
in C:...\node_modules\react-admin-lb4\index.js
lb4 generates a query in the web browser:
{
  "offset": 0,
  "limit": 100,
  "skip": 0,
  "order": "string",
  "where": {
    "additionalProp1": {}
  },
  "fields": {
    "id": true,
    "name": true,
    "average": true
  }
}
I get an error with status code 500 even when I run the query from the browser myself.
Then I change the line:
"order": "string",
to:
"order": "",
Now I get status code 200 and an array with the data table's contents, i.e. the correct result. So far so good.
BUT: I don't know where in the *.ts code autogenerated by lb4 I need to make a change so that it stops generating "order": "string". I tried to figure out where / which file is responsible, but could not find it.
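For comparison, a filter with concrete values (the shape LoopBack 4's REST layer accepts) would look something like this; this is a sketch, assuming the "property ASC" / "property DESC" order convention and a made-up where clause on the name field from the fields block above:

{
  "offset": 0,
  "limit": 100,
  "skip": 0,
  "order": ["name ASC"],
  "where": {
    "name": { "like": "%a%" }
  },
  "fields": {
    "id": true,
    "name": true,
    "average": true
  }
}

The "order": "string" and "additionalProp1" entries in the generated query look like the Swagger/OpenAPI placeholder example values for string-typed and free-form properties, rather than something the autogenerated controllers themselves produce.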

Related

Getting an error while updating "enableEmbed": true

Description
When I try to update the Allow Embed flag to true ("enableEmbed": true) for a live video, I get the error:
"code": 400,
"message": "Embed setting was invalid",
With "enableEmbed": false it works fine; the issue only occurs with "enableEmbed": true.
API request with parameters used
Request URL: https://youtube.googleapis.com/youtube/v3/liveBroadcasts?part=contentDetails&key=AIzaSy...
Parameters
{
  "id": "",
  "contentDetails": {
    "enableEmbed": true,
    "monitorStream": {
      "enableMonitorStream": true,
      "broadcastStreamDelayMs": 5
    }
  }
}
Result (copy and paste a JSON response you received)
Error with "enableEmbed": true:
{
  "error": {
    "code": 400,
    "message": "Embed setting was invalid",
    "errors": [
      {
        "message": "Embed setting was invalid",
        "domain": "youtube.liveBroadcast",
        "reason": "invalidEmbedSetting",
        "extendedHelp": "https://developers.google.com/youtube/v3/live/docs/liveBroadcasts#contentDetails.enableEmbed"
      }
    ]
  }
}
Expected result
{
  "kind": "youtube#liveBroadcast",
  "etag": "xxxx",
  "id": "xxxxx",
  "contentDetails": {
    "boundStreamId": "xxxxxxxx",
    "boundStreamLastUpdateTimeMs": "2022-10-31T21:57:29Z",
    "monitorStream": {
      "enableMonitorStream": true,
      "broadcastStreamDelayMs": 5,
      "embedHtml": ""
    },
    "enableEmbed": true,
    "enableDvr": true,
    "enableContentEncryption": false,
    "startWithSlate": false,
    "recordFromStart": true,
    "enableClosedCaptions": false,
    "closedCaptionsType": "closedCaptionsDisabled",
    "enableLowLatency": false,
    "latencyPreference": "normal",
    "projection": "rectangular",
    "enableAutoStart": true,
    "enableAutoStop": true
  }
}
Is it 100% reproducible?
I am testing with two accounts. With one account the same code works fine, but with the other it gives the error.
Reproducible API explorer link
It is not reproducible on every account; it errors only for one particular account and I don't know what the issue is.
I'm trying to resolve the "enableEmbed": true error and have tried multiple things with no luck. I'd appreciate any help; the issue affects only certain accounts, not every account.
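For context, the request above corresponds to the liveBroadcasts.update method, which is a PUT and needs an OAuth bearer token in addition to the API key, with the broadcast id set in the body. A minimal sketch of the call (BROADCAST_ID, ACCESS_TOKEN, and API_KEY are placeholders, not values from the report):

// Sketch of the liveBroadcasts.update request described above.
// All credentials and ids below are placeholders.
const url =
  "https://youtube.googleapis.com/youtube/v3/liveBroadcasts?part=contentDetails&key=API_KEY";
const body = {
  id: "BROADCAST_ID",
  contentDetails: {
    enableEmbed: true,
    monitorStream: {
      enableMonitorStream: true,
      broadcastStreamDelayMs: 5,
    },
  },
};
const response = await fetch(url, {
  method: "PUT",
  headers: {
    Authorization: "Bearer ACCESS_TOKEN",
    "Content-Type": "application/json",
  },
  body: JSON.stringify(body),
});
console.log(response.status, await response.json());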

AppSync request mapping template errors not logged in CloudWatch

Crosspost from: https://repost.aws/questions/QUp5jDZ6bsRkeXhIwHgQaWkg/app-sync-request-mapping-template-errors-not-logged-in-cloud-watch
I have a simple resolver that has a simple Lambda function as a data source. This function always throws an error (to test out logging).
The resolver has request mapping template enabled and it is configured as follows:
$util.error("request mapping error 1")
The API has logging configured to be as verbose as possible, yet I cannot see this request mapping error 1 in my CloudWatch logs under the RequestMapping log type:
{
  "logType": "RequestMapping",
  "path": [
    "singlePost"
  ],
  "fieldName": "singlePost",
  "resolverArn": "xxx",
  "requestId": "bab942c6-9ae7-4771-ba45-7911afd262ac",
  "context": {
    "arguments": {
      "id": "123"
    },
    "stash": {},
    "outErrors": []
  },
  "fieldInError": false,
  "errors": [],
  "parentType": "Query",
  "graphQLAPIId": "xxx"
}
The error is not completely lost, because I can see it in the query response:
{
  "data": {
    "singlePost": null
  },
  "errors": [
    {
      "path": [
        "singlePost"
      ],
      "data": null,
      "errorType": null,
      "errorInfo": null,
      "locations": [
        {
          "line": 2,
          "column": 3,
          "sourceName": null
        }
      ],
      "message": "request mapping error 1"
    }
  ]
}
When I add $util.appendError("append request mapping error 1") to the request mapping template so it looks like this:
$util.appendError("append request mapping error 1")
$util.error("request mapping error 1")
Then the appended error appears in the RequestMapping log type but the errors array is still empty:
{
  "logType": "RequestMapping",
  "path": [
    "singlePost"
  ],
  "fieldName": "singlePost",
  "resolverArn": "xxx",
  "requestId": "f8eecff9-b211-44b7-8753-6cc6e269c938",
  "context": {
    "arguments": {
      "id": "123"
    },
    "stash": {},
    "outErrors": [
      {
        "message": "append request mapping error 1"
      }
    ]
  },
  "fieldInError": false,
  "errors": [],
  "parentType": "Query",
  "graphQLAPIId": "xxx"
}
When I do the same thing with the response mapping template, everything works as expected (the errors array contains the $util.error(message) message and the outErrors array contains the $util.appendError(message) messages).
Is this working as expected, i.e. will $util.error(message) never show up in RequestMapping-type CloudWatch logs?
Under what conditions will the errors array in the RequestMapping log type be populated?
Bonus question: can the errors array contain more than one item for either the RequestMapping or ResponseMapping log types?
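As a debugging aside, one way to see every log entry AppSync emitted for a single request, whatever the log type, is to filter the API's CloudWatch log group by the requestId. A minimal sketch with the AWS SDK for JavaScript v3, assuming the standard /aws/appsync/apis/<api-id> log group naming (the API id below is a placeholder):

import {
  CloudWatchLogsClient,
  FilterLogEventsCommand,
} from "@aws-sdk/client-cloudwatch-logs";

// Placeholders: substitute your API id and a requestId from the logs above.
const logGroupName = "/aws/appsync/apis/YOUR_GRAPHQL_API_ID";
const requestId = "bab942c6-9ae7-4771-ba45-7911afd262ac";

const client = new CloudWatchLogsClient({});
const out = await client.send(
  new FilterLogEventsCommand({
    logGroupName,
    filterPattern: `"${requestId}"`,
  })
);
for (const event of out.events ?? []) {
  console.log(event.message);
}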

BigQuery Execute fails with no meaningful error on Cloud Data Fusion

I'm trying to use the BigQuery Execute function in Cloud Data Fusion (Google). The component validates fine and the SQL checks out, but I get this unhelpful error on every execution:
02/11/2022 12:51:25 ERROR Pipeline 'test-bq-execute' failed.
02/11/2022 12:51:25 ERROR Workflow service 'workflow.default.test-bq-execute.DataPipelineWorkflow.<guid>' failed.
02/11/2022 12:51:25 ERROR Program DataPipelineWorkflow execution failed.
I can see nothing else to help me debug this. Any ideas? The SQL in question is a simple DELETE FROM dataset.table WHERE ds = CURRENT_DATE().
This was the pipeline:
{
  "name": "test-bq-execute",
  "description": "Data Pipeline Application",
  "artifact": {
    "name": "cdap-data-pipeline",
    "version": "6.5.1",
    "scope": "SYSTEM"
  },
  "config": {
    "resources": {
      "memoryMB": 2048,
      "virtualCores": 1
    },
    "driverResources": {
      "memoryMB": 2048,
      "virtualCores": 1
    },
    "connections": [],
    "comments": [],
    "postActions": [],
    "properties": {},
    "processTimingEnabled": true,
    "stageLoggingEnabled": false,
    "stages": [
      {
        "name": "BigQuery Execute",
        "plugin": {
          "name": "BigQueryExecute",
          "type": "action",
          "label": "BigQuery Execute",
          "artifact": {
            "name": "google-cloud",
            "version": "0.18.1",
            "scope": "SYSTEM"
          },
          "properties": {
            "project": "auto-detect",
            "sql": "DELETE FROM GCPQuickStart.account WHERE ds = CURRENT_DATE()",
            "dialect": "standard",
            "mode": "batch",
            "dataset": "GCPQuickStart",
            "table": "account",
            "useCache": "false",
            "location": "US",
            "rowAsArguments": "false",
            "serviceAccountType": "filePath",
            "serviceFilePath": "auto-detect"
          }
        },
        "outputSchema": [
          {
            "name": "etlSchemaBody",
            "schema": ""
          }
        ],
        "id": "BigQuery-Execute",
        "type": "action",
        "label": "BigQuery Execute",
        "icon": "fa-plug"
      }
    ],
    "schedule": "0 1 */1 * *",
    "engine": "spark",
    "numOfRecordsPreview": 100,
    "maxConcurrentRuns": 1
  }
}
I was able to catch the error using Cloud Logging. To enable Cloud Logging in Cloud Data Fusion, you can follow the GCP documentation on enabling it and on viewing Data Fusion logs in Cloud Logging. Replicating your scenario, this is the error I found:
"logMessage": "Program DataPipelineWorkflow execution failed.\njava.util.concurrent.ExecutionException: com.google.cloud.bigquery.BigQueryException: Cannot set destination table in jobs with DML statements\n at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)\n at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)\n at io.cdap.cdap.internal.app.runtime.distributed.AbstractProgramTwillRunnable.run(AbstractProgramTwillRunnable.java:274)\n at org.apache.twill.interna..."
}
What we did to resolve the error Cannot set destination table in jobs with DML statements was to leave the Dataset Name and Table Name empty in the pipeline properties, since no destination table needs to be specified for a DML statement.
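For clarity, a sketch of the corrected plugin properties block (identical to the pipeline above, with the dataset and table keys simply left out; leaving the Dataset Name and Table Name fields empty in the UI should produce the equivalent):

"properties": {
  "project": "auto-detect",
  "sql": "DELETE FROM GCPQuickStart.account WHERE ds = CURRENT_DATE()",
  "dialect": "standard",
  "mode": "batch",
  "useCache": "false",
  "location": "US",
  "rowAsArguments": "false",
  "serviceAccountType": "filePath",
  "serviceFilePath": "auto-detect"
}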

Datatables - parameter 0 for row 0

$('#datatable').DataTable({
  "processing": true,
  "serverSide": true,
  "ajax": {
    "url": "../../WebPost/AjaxPinToFolderSearch",
    "data": function (d) {
      d.postID = globalPinToFolderSearchID;
    },
    "columns": [
      { "data": "Folder", "defaultContent": "Value Not Received" },
      { "data": "Pinned", "defaultContent": "Value Not Received" },
      { "data": "StartDate", "defaultContent": "Value Not Received" },
      { "data": "EndDate", "defaultContent": "Value Not Received" }
    ]
  }
});
With example response (taken from developer tools Network Response):
{"data":[{"Folder":"Home/Test One/Frogger","Pinned":false,"StartDate":"\/Date(18000000)\/","EndDate":"\/Date(18000000)\/"}]}
Here is an example showing the error message: http://lektrikpuke-001-site1.ctempurl.com/
DataTables appears to be working correctly in that it is requesting and receiving data. Yet the error pops up and the table displays empty rows (responsively: 1 row of data = 1 blank row in the table, 10 rows of data = 10 blank rows). I realize this is a common question, but I cannot figure out what is wrong. As a note, the backend is C#.
Minor issue: the columns option shouldn't be part of the ajax options. Move it out and it'll work without any errors, as the DataTable will then receive the correct columns (which in your case were null). I tested it in the console and it worked. Let me know if that doesn't work for you.
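Applying that fix, the initialization from the question becomes the following (only columns has moved out of the ajax block; everything else is unchanged):

$('#datatable').DataTable({
  "processing": true,
  "serverSide": true,
  "ajax": {
    "url": "../../WebPost/AjaxPinToFolderSearch",
    "data": function (d) {
      d.postID = globalPinToFolderSearchID;
    }
  },
  // columns is a top-level DataTables option, not an ajax option
  "columns": [
    { "data": "Folder", "defaultContent": "Value Not Received" },
    { "data": "Pinned", "defaultContent": "Value Not Received" },
    { "data": "StartDate", "defaultContent": "Value Not Received" },
    { "data": "EndDate", "defaultContent": "Value Not Received" }
  ]
});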

Using the advanced query api and get all pages back

I can successfully call an advanced query method and get the first page of data back (using the POST option) as referenced in http://api.docs.import.io/#QueryMethods.
Does anyone have an idea how to page after that? I only get 20 of the 190 results. My query looks like:
var query = {
  "input": { "last_name": name },
  "additionalInput": {
    "8d817939-my-api-key-9502ed72": cookie
  },
  "returnPaginationSuggestions": true
}
where the name and cookie parameters are known variables.
The results do not return a pagination block either, as in the model result:
{
  "connectorVersionGuid": "string",
  "pagination": {
    "pattern": "string",
    "next": "string",
    "currentPageNum": 0,
    "previous": "string"
  },
  "connectorGuid": "string",
  "totalResults": 0,
  "errorType": "TimeoutException",
  "outputProperties": [
    {
      "type": "STRING",
      "name": "string"
    }
  ],
  "cookies": [
    "string"
  ],
  "results": [
    {}
  ],
  "pageUrl": "string",
  "error": "string",
  "data": {}
}
If the response does not return the "Pagination" block, it means that the system was not able to identify pagination on the given page.
As far as I remember, pagination is flaky for Extractor APIs, while it works quite well for Magic APIs. I would recommend trying to get a Magic extractor and getting pagination suggestions for it. Then you should be able to get the "Pagination" block in your response and use the "next" value to get the URL of the next page.
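Once a Pagination block is returned, a client can keep requesting pages until "next" is no longer present. A rough sketch in TypeScript (the endpoint URL is a placeholder and the exact request/response shape comes from the query method docs linked above, so treat this as an outline rather than working import.io code):

// Outline: accumulate results by following pagination.next until it is absent.
// QUERY_URL is a placeholder; auth/cookie handling follows the query above.
async function fetchAllPages(query: object): Promise<object[]> {
  const results: object[] = [];
  let url: string | null = "https://QUERY_URL"; // placeholder endpoint
  while (url) {
    const res = await fetch(url, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(query),
    });
    const page = await res.json();
    results.push(...(page.results ?? []));
    // When pagination was detected, "next" holds the next page's URL.
    url = page.pagination?.next ?? null;
  }
  return results;
}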