I have this JSON:
{'category':'runtime_exception',
'action':"Cannot read property 'isCurrentlySorted' of undefined",
'value':'TypeError: Cannot read property "isCurrentlySorted" of undefined at Ht.SetSectionCurrentlySorted',
'current_url':'/drugs-list',
'serial_number':'A4001-2816',
'session_id':null,
'timeStamp':'2021-08-10T14:11:10.309Z',
'localTimeStamp':'Tue Aug 10 2021 17:11:10 GMT+0300 (Israel Daylight Time)',
'user':{'machine':null,'info':{'user_id':'yuval.haliva#eitanmedical.com'}},
'event_id':9}
When uploading it to BigQuery, I get this error:
Error while reading data, error message: JSON parsing error in row starting at position 0: No such field: category.
Your JSON is invalid. Here is a quick overview of basic JSON syntax: https://www.w3schools.com/js/js_json_syntax.asp
In short, use double quotes (") to encapsulate your keys and values instead of single quotes ('). There are a lot of online JSON validators out there. Here is an example: https://jsonformatter.curiousconcept.com/
It auto-corrects your JSON to:
{
"category":"runtime_exception",
"action":"Cannot read property 'isCurrentlySorted' of undefined",
"value":"TypeError: Cannot read property \"isCurrentlySorted\" of undefined at Ht.SetSectionCurrentlySorted",
"current_url":"/drugs-list",
"serial_number":"A4001-2816",
"session_id":null,
"timeStamp":"2021-08-10T14:11:10.309Z",
"localTimeStamp":"Tue Aug 10 2021 17:11:10 GMT+0300 (Israel Daylight Time)",
"user":{
"machine":null,
"info":{
"user_id":"yuval.haliva#eitanmedical.com"
}
},
"event_id":9
}
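If the records are being produced from Python dictionaries (an assumption on my part, since the single-quoted keys look like a str()/repr() dump), a minimal sketch of writing newline-delimited JSON that BigQuery will accept:

import json

# Hypothetical record built as a Python dict; adjust to your real fields.
record = {
    "category": "runtime_exception",
    "session_id": None,  # json.dumps turns None into JSON null
    "event_id": 9,
}

# json.dumps always emits double-quoted keys and values, i.e. valid JSON.
# One object per line gives the newline-delimited JSON that bq load expects.
with open("events.json", "w") as out:
    out.write(json.dumps(record) + "\n")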
I am new to the MediaWiki web service API.
I am trying to pull data from the table on this URL: https://lessonslearned.em.se.com/lessons/Main_Page
I'm testing it using its sandbox: https://lessonslearned.em.se.com/lessons/Special:ApiSandbox
Now, I can view the source of the page, for example:
{{#ask:[[Category:Lesson]][[Has region::NAM]][[Creation date::>{{#time: r | -1 year}}]]|format=count}}
The code above should return the lessons from NAM in the last 12 months, which should have a value of 1 based on the table. But I am getting an error in this part: [[Creation date::>{{#time: r | -1 year}}]].
Error message:{ "error": { "query": [ "\"Thu, 31 Mar 2022 03:50:49 +0000 -1 03202233103\" contains an extrinsic dash or other characters that are invalid for a date interpretation." ] } }
I tried breaking the code into parts and found that the root cause is the hyphen (-) in -1 year. I also checked the documentation of the #time parser function (https://www.mediawiki.org/wiki/Help:Extension:ParserFunctions##time), and it has samples similar to this, but in my case it is not working.
I hope someone can at least point me to a reference for this problem. Thanks!
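For reference, one formulation that might be worth trying (an assumption on my part, not a verified fix): have #time emit a plain ISO date so that nothing beyond a simple date ends up inside the comparison, e.g.:

{{#ask:[[Category:Lesson]][[Has region::NAM]][[Creation date::>{{#time: Y-m-d | -1 year}}]]|format=count}}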
How do I serialize a Timestamp into a JSON field in the NiFi ValidateRecord processor / JsonRecordSetWriter?
As input, I have a CSV file with a timestamp column in the format yyyy-MM-dd HH:mm:ss.SSS.
In my NiFi flow I have a ValidateRecord processor that uses a CSVReader for reading and a JsonRecordSetWriter as the writer. Both of them use an Avro schema with the timestamp field defined as:
"fields" : [ {
"name" : "timestamp",
"type" : {
"type" : "long",
"logicalType" : "timestamp-millis"
},
"doc" : "Type inferred from '2016/10/08 07:51:00.000'"
}, {
...
When a record with a field value like 2016-10-08 07:51:00.000 comes through, I get an exception in the NiFi logs:
2018-10-18 17:05:59,135 ERROR [Timer-Driven Process Thread-1] o.a.n.processors.standard.ValidateRecord ValidateRecord[id=3d44915d-a52a-3eb0-1ae1-7b0cbe4b1a03] Failed to write MapRecord[{timestamp=2016-10-08 07:51:00.0, ... ] with schema {"type":"record","name":"redfunnel","doc":"Schema generated by Kite","fields":[{"name":"timestamp","type":{"type":"long","logicalType":"timestamp-millis"},"doc":"Type inferred from '2016/10/08 07:51:00.000'"},{ .... }]} as a JSON Object due to java.lang.IllegalStateException: No ObjectCodec defined for the generator, can only serialize simple wrapper types (type passed java.sql.Timestamp): java.lang.IllegalStateException: No ObjectCodec defined for the generator, can only serialize simple wrapper types (type passed java.sql.Timestamp)
java.lang.IllegalStateException: No ObjectCodec defined for the generator, can only serialize simple wrapper types (type passed java.sql.Timestamp)
at org.codehaus.jackson.impl.JsonGeneratorBase._writeSimpleObject(JsonGeneratorBase.java:556)
at org.codehaus.jackson.impl.JsonGeneratorBase.writeObject(JsonGeneratorBase.java:317)
at org.apache.nifi.json.WriteJsonResult.writeRawValue(WriteJsonResult.java:267)
at org.apache.nifi.json.WriteJsonResult.writeRecord(WriteJsonResult.java:201)
at org.apache.nifi.json.WriteJsonResult.writeRawRecord(WriteJsonResult.java:149)
at org.apache.nifi.processors.standard.ValidateRecord.onTrigger(ValidateRecord.java:342)
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165)
at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
In the properties of my JsonRecordSetWriter I've tried to specify a format for writing a Timestamp as yyyy-MM-dd HH:mm:ss.SSS,
but unfortunately without success; I am still getting the same exception in the NiFi logs.
Does that mean that JsonRecordSetWriter cannot serialize java.sql.Timestamp by default, even though it has a Timestamp Format property for configuring seemingly exactly that?
Is it possible to write a Timestamp according to a custom format using out-of-the-box NiFi components, or do I have to modify the JsonRecordSetWriter?
Update
Following the code, my exception is thrown from this code branch.
It seems to be the branch for invalid records that did not pass validation. Maybe my error occurs only on invalid records.
It seems that I have found a configuration that works in my case.
I had to split the schema into two: one for the input and another one for the output.
So, schema1 defines the timestamp field as:
{
"name" : "timestamp",
"type" : "string",
"doc" : "Type inferred from '2016/10/08 07:51:00.000'"
}
and schema2 defines the timestamp field as:
{
"name" : "timestamp",
"type" : {
"type" : "long",
"logicalType" : "timestamp-millis"
},
"doc" : "Type inferred from '2016/10/08 07:51:00.000'"
}
Now I am configuring the ValidateRecord processor with:
- a CSVReader that uses schema1
- a JsonRecordSetWriter that uses schema2
- the ValidateRecord's "Schema Text" field set to schema1
After that, the records pass my ValidateRecord processor without errors and land in the timestamp field of a Postgres database via a PutDatabaseRecord processor that uses a JsonTreeReader configured with schema2.
It is also important to configure the JsonTreeReader's Timestamp Format property with the correct string format, e.g. 'yyyy-MM-dd HH:mm:ss.SSS' in my case.
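For completeness, a full schema1 wrapping the field above could look like the sketch below (the record name redfunnel is taken from the error log and is otherwise arbitrary; schema2 is identical apart from the timestamp field type shown earlier):

{
  "type" : "record",
  "name" : "redfunnel",
  "fields" : [ {
    "name" : "timestamp",
    "type" : "string",
    "doc" : "Type inferred from '2016/10/08 07:51:00.000'"
  } ]
}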
Hopefully that will help someone out there in a similar situation.
Let's say I have a table with a single field named "version", which is a string. When I try to load data into the table using autodetect with values like "1.1" or "1", the autodetect feature infers these values as float or integer types respectively.
data1.json example:
{ "version": "1.11.0" }
bq load output:
$ bq load --autodetect --schema_update_option=ALLOW_FIELD_ADDITION --source_format=NEWLINE_DELIMITED_JSON temp_test.temp_table ./data1.json
Upload complete.
Waiting on bqjob_ZZZ ... (1s) Current status: DONE
data2.json example:
{ "version": "1.11" }
bq load output:
$ bq load --autodetect --schema_update_option=ALLOW_FIELD_ADDITION --source_format=NEWLINE_DELIMITED_JSON temp_test.temp_table ./data2.json
Upload complete.
Waiting on bqjob_ZZZ ... (0s) Current status: DONE
BigQuery error in load operation: Error processing job 'YYY:bqjob_ZZZ': Invalid schema update. Field version has changed type from STRING to FLOAT
data3.json example:
{ "version": "1" }
bq load output:
$ bq load --autodetect --schema_update_option=ALLOW_FIELD_ADDITION --source_format=NEWLINE_DELIMITED_JSON temp_test.temp_table ./data3.json
Upload complete.
Waiting on bqjob_ZZZ ... (0s) Current status: DONE
BigQuery error in load operation: Error processing job 'YYY:bqjob_ZZZ': Invalid schema update. Field version has changed type from STRING to INTEGER
The scenario where this problem doesn't happen is when, in the same file, you have another JSON line whose value is correctly inferred as a string (as seen in the Bigquery autoconverting fields in data question):
{ "version": "1.12" }
{ "version": "1.12.0" }
In the question listed above, there's an answer stating that a fix was pushed to production, but it looks like the bug is back again. Is there a way/workaround to prevent this?
It looks like the confusing part here is whether "1.12" should be detected as a string or a float. BigQuery chose to detect it as a float. Before autodetect was introduced in BigQuery, BigQuery allowed users to load float values in string format. This is very common in CSV/JSON formats. So when autodetect was introduced, it kept this behavior. Autodetect will scan up to 100 rows to detect the type. If for all 100 rows the data looks like "1.12", then very likely this field is a float value. If one of the rows has the value "1.12.0", then BigQuery will detect that the type is string, as you have observed.
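One possible workaround, assuming you can give up autodetect for this table (the command below mirrors the earlier examples and is only a sketch): pass an explicit schema so the version field is always loaded as STRING, for example:

bq load --source_format=NEWLINE_DELIMITED_JSON temp_test.temp_table ./data2.json version:STRING

With an explicit schema there is no type inference, so "1.11", "1.12" and "1" are all kept as strings.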
I am getting the DataWeave exception below while executing a Mule flow:
"
INFO 2016-11-06 09:02:42,097 [[abc].HTTP_Listener_Configuration.worker.01] com.mulesoft.weave.mule.utils.MuleWeaveFactory$: MimeType was not resolved '*/*' delegating to Java.
ERROR 2016-11-06 09:02:42,290 [[abc].HTTP_Listener_Configuration.worker.01] org.mule.exception.CatchMessagingExceptionStrategy:
Message : Exception while executing:
"Response": {
^
Unexpected character '\u000a' at index 25 (line 2, position 24), expected '"'
Payload : test
Payload Type : java.lang.String
Element : /Process11/processors/9/1/9 # abc:def.xml:331 (TM_F1)
Element XML : <dw:transform-message doc:name="TM_F1">
<dw:set-payload>%dw 1.0%output application/json---{Data: [{// in_id : flowVars.instanceId,pd: '{AmIds:[{AmId:' ++ flowVars.AmId ++ '}]}'}]}</dw:set-payload>
</dw:transform-message>
Root Exception stack trace:
com.mulesoft.weave.reader.json.JsonReaderException: Unexpected character '\u000a' at index 25 (line 2, position 24), expected '"'
at com.mulesoft.weave.reader.json.JsonTokenizer.fail(JsonTokenizer.scala:193)
at com.mulesoft.weave.reader.json.JsonTokenizer.require(JsonTokenizer.scala:190)
at com.mulesoft.weave.reader.json.JsonTokenizer.readString(JsonTokenizer.scala:80)
"
Is there any way to enable more debug options to get more information about this particular exception, so that it is easier to find the root cause?
The problem here is that even though I am not using the payload in the Transform Message component, I am getting an error because of the payload returned by the previous HTTP call in the Mule flow.
Mule version: Studio 6.1 and runtime 3.8.
Please help me to solve this issue.
Thanks
This is not a DataWeave question. The exception you have is a JsonReaderException:
com.mulesoft.weave.reader.json.JsonReaderException: Unexpected character '\u000a' at index 25 (line 2, position 24), expected '"'
It means that the JSON you provide has a newline (\u000a) at line 2, position 24. I imagine it is something like this:
"Response": {
"Message" : "67890123
456 the end"
}
Use escape sequences to represent the newline in JSON:
"Response": {
"Message" : "67890123\n456 the end"
}
Enable INFO logs in log4j, and enable debug logs in CloudHub if it is a cloud deployment.
Please try validating the JSON as well.
Debugging is the best option to figure out these kinds of errors. You can also use the logger feature of DataWeave to log specific values to the console and see what is wrong with the value.
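As a small illustration of the DataWeave logger mentioned above (a sketch assuming DataWeave 1.0; the field and flow variable names are taken from the transform in the question), wrapping a value in log() prints it to the console while leaving the transformation result unchanged:

%dw 1.0
%output application/json
---
{
  // Logs "AmId value is <value>" to the console and returns the value itself
  AmId: log("AmId value is", flowVars.AmId)
}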
I have a job that I run with jobs().insert()
Currently I have the job failing with:
2014-11-11 11:19:15,937 - ERROR - bigquery - invalid: Could not convert value to string
Considering I have 500+ columns, I find this error message useless and pretty pathetic. What can I do to receive proper, more detailed error information from BigQuery?
The structured error return dictionary contains three elements: a "reason", a "location", and a "message". From the log line you included, it looks like only the message is logged.
Here's an example error return from a CSV import with data that doesn't match the target table schema:
"errors": [
{
"reason": "invalid",
"location": "File: 0 / Line:2 / Field:1",
"message": "Value cannot be converted to expected type."
},
...
Similar errors are returned from JSON imports with data that doesn't match the target table schema.
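If the job was submitted through the Google API Python client (as jobs().insert() suggests), one way to pull the full structured error list is to read status.errors from jobs().get(). This is only a sketch; the project and job IDs are placeholders, and service is assumed to be an authorized BigQuery v2 client built with googleapiclient.discovery.build:

# Fetch the finished job and print every structured error it reports.
job = service.jobs().get(projectId='my-project', jobId='my-job-id').execute()
for err in job['status'].get('errors', []):
    print(err.get('reason'), err.get('location'), err.get('message'))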
I hope this helps!