Failed to transfer data from GCS to BigQuery table - google-bigquery

Need help with the BigQuery Data Transfer Service (DTS).
After creating a table "allorders" with an auto-detected schema, I created a data transfer service. But when I run the DTS I get an error; see the job below. The quantity field type is definitely set to INTEGER, and all the data in that field are whole numbers.
Job bqts_602c3b1a-0000-24db-ba34-30fd38139ad0 (table allorders) failed
with error INVALID_ARGUMENT: Error while reading data, error message:
Could not parse 'quantity' as INT64 for field quantity (position 14)
starting at location 0 with message 'Unable to parse'; JobID:
956421367065:bqts_602c3b1a-0000-24db-ba34-30fd38139ad0
When I recreated the table with every field set to type STRING, it worked fine. See the job below:
Job bqts_607cef13-0000-2791-8888-001a114b79a8 (table allorders)
completed successfully. Number of records: 56017, with errors: 0.

Try finding the unparseable values in the all-STRING copy of the table:
SELECT *
FROM dataset.allorders  -- the copy of the table where every field is STRING
WHERE SAFE_CAST(quantity AS INT64) IS NULL;  -- rows whose quantity cannot be parsed as INT64
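Also worth noting: the value that failed to parse is the literal string 'quantity' at location 0, which usually means the CSV header row is being read as data. If that is what the query above turns up, one fix is to skip the header when loading; a minimal sketch using the LOAD DATA statement (the dataset, table, and URI are placeholders, and the transfer config may expose an equivalent header-skipping setting):
LOAD DATA INTO dataset.allorders
FROM FILES (
  format = 'CSV',
  uris = ['gs://your-bucket/orders*.csv'],  -- placeholder URI
  skip_leading_rows = 1                     -- do not load the header row as data
);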

Table will not load into BigQuery

I've tried loading a table into BigQuery with no success; the error messages I keep getting are attached below. I've tried entering my schema manually as well as letting Google auto-detect it, and neither works.
Here are my error messages:
Error while reading data, error message: CSV table references column position 11, but line starting at position:606 contains only 1 columns.
Error while reading data, error message: CSV processing encountered too many errors, giving up. Rows: 0; errors: 1; max bad: 0; error percent: 0
And here is my schema:
Product_Type - String
Product_Name - String
Size - String
Manufacturer - String
SKU - String
NDC - String
Price - Float
UOM - String
Alt_UOM_Price - Float
Alt_UOM - String
Net_Price - Float
NEt_UOM - String
Try enabling the "Allow jagged rows" option when importing.
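If you load with SQL rather than through the console, the same behaviour is available as the allow_jagged_rows load option; a minimal sketch (the dataset, table, and URI are placeholders, and the bq CLI equivalent is the --allow_jagged_rows flag):
LOAD DATA INTO dataset.products
FROM FILES (
  format = 'CSV',
  uris = ['gs://your-bucket/products.csv'],  -- placeholder URI
  skip_leading_rows = 1,       -- skip the header row if the file has one
  allow_jagged_rows = true     -- rows missing trailing optional columns are padded with NULLs
);
With jagged rows allowed, a short line is loaded with NULLs in the missing trailing columns instead of failing the whole job.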

Why do I get runtime error DBSQL_DBSL_LENGTH_ERROR?

I have the following code:
DATA: lt_matnr    TYPE TABLE OF mara,
      ls_matnr    TYPE mara,
      lv_werk     TYPE werks_d VALUE 'WERK',
      lt_stoc_int TYPE TABLE OF zmm_s_stock_list,
      lt_stoc     TYPE TABLE OF zsd_stock_list,
      ls_stoc     TYPE zsd_stock_list.

SELECT matnr
  FROM mara
  INTO CORRESPONDING FIELDS OF TABLE lt_matnr.

LOOP AT lt_matnr INTO ls_matnr.
  CALL FUNCTION 'Z_MM_LIST_STOC_MATERIAL_WERKS'
    EXPORTING
      IP_MATNR     = ls_matnr-matnr
      IP_WERKS     = lv_werk
    IMPORTING
      ET_STOCK_EXP = lt_stoc_int.

  LOOP AT lt_stoc_int ASSIGNING FIELD-SYMBOL(<ls_stoc_int>).
    MOVE-CORRESPONDING <ls_stoc_int> TO ls_stoc.
*   + other data processing ...
    APPEND ls_stoc TO lt_stoc.
  ENDLOOP.
ENDLOOP.

INSERT zsd_stock_list FROM TABLE lt_stoc.
Everything works fine until the INSERT statement, where I get the following short dump:
Runtime error: DBSQL_DBSL_LENGTH_ERROR
Exception: CX_SY_OPEN_SQL_DB
Error analysis:
An exception has occurred which is explained in more detail below. The
exception, which is assigned to class 'CX_SY_OPEN_SQL_DB', was not caught and
therefore caused a runtime error. The reason for the exception is:
While accessing a database, the length of a field in ABAP does not
match the size of the corresponding database field.
This can happen for example if a string is bound to a database field
that is shorter than the current string.
This makes little sense to me: lt_stoc is TYPE TABLE OF zsd_stock_list, so how can a field length not match?

How to resolve this SQL error with schema_of_json

I need to find out the schema of a given JSON file. I see SQL has a schema_of_json function,
and something like this works flawlessly:
> SELECT schema_of_json('[{"col":0}]');
ARRAY<STRUCT<`col`: BIGINT>>
But if I run it against a column of my table, it gives me the following error:
>SELECT schema_of_json(Transaction) as json_data from table_name;
Error in SQL statement: AnalysisException: cannot resolve 'schemaofjson(`Transaction`)' due to data type mismatch: The input json should be a string literal and not null; however, got `Transaction`.; line 1 pos 7;
Transaction is one of the columns in my table, and after checking it manually I can attest that it is of STRING type (containing JSON).
The SQL statement is supposed to give me the schema of the JSON; how do I do that?
After looking further into the documentation, it is clear that "foldable" means a static (literal) value, so a column read from a table won't work.
For a minimal reproducible example:
SELECT schema_of_json(CAST('{ "a": "b" }' AS STRING))
As soon as the CAST is introduced in the statement above, schema_of_json fails. It needs a static JSON literal as its input.
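Since schema_of_json only accepts a foldable string literal, one workaround (a sketch; Transaction and table_name are the names from the question, and the literal in the second statement is a placeholder) is to sample one document from the column and feed it back in by hand:
-- Step 1: pull one representative JSON document out of the column
SELECT Transaction FROM table_name LIMIT 1;
-- Step 2: paste the sampled document back in as a string literal,
-- which schema_of_json can evaluate at analysis time
SELECT schema_of_json('{"col": 0}');  -- replace with the document returned by step 1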

Insert a new timestamp value into the acc table in Kamailio

I want to add a new column to the acc table. I created a new column in the acc table of type timestamp and named it ring_time. In every call I put the ring time into a $dlg_var like this:
$dlg_var(ringtime) = $Ts;
Then I add an extra column in the config like this:
modparam("acc", "log_extra", "src_user=$fU;src_domain=$fd;src_ip=$si;" "dst_ouser=$tU;dst_user=$rU;dst_domain=$rd;ring_time=$dlg_var(ringtime)")
but when I try to test it, I always get:
db_mysql [km_dbase.c:122]: db_mysql_submit_query(): driver error on query: Incorrect datetime value: '1591361996' for column kamailio.acc.ring_time at row 1 (1292)
Jun 5 17:29:59 kamailio /usr/sbin/kamailio[22901]: ERROR: {2 102 INVITE 105a0f4a3d99a0a5558355e54b43f4e1#192.168.1.121:5060} <core> [db_query.c:244]: db_do_insert_cmd(): error while submitting query
Jun 5 17:29:59 kamailio /usr/sbin/kamailio[22901]: ERROR: {2 102 INVITE 105a0f4a3d99a0a5558355e54b43f4e1#192.168.1.121:5060} acc [acc.c:477]: acc_db_request(): failed to insert into database
Sounds like an error with the SQL INSERT query. If I had to guess, I'd say you're being caught out by the date format in the SQL table not matching the date format you're pushing to it.
I don't know the structure of your database, but there's a simple trick I use for debugging SQL queries when I can't see the query being run:
Start up Wireshark/tcpdump on the machine, capture all SQL traffic (MySQL is port 3306), and replicate the error.
From the packet capture you'll be able to see the query Kamailio's database engine ran.
If the error "db_mysql [km_dbase.c:122]: db_mysql_submit_query(): driver error on query: Incorrect datetime value: '1591361996' for column kamailio.acc.ring_time at row 1 (1292)", the '1591361996' looks like it is an epoch for the $dlg_var(ringtime). The "Incorrect datetime value" part of the error looks like the database is trying to store the value in datetime data type so a data type mismatch. Double-check and you may need either change the ringtime to convert to datetime or change the database column to a type that will take epoch.

AWS Athena: Error parsing field value and unexpected query results

I have a table whose schema was prepared by AWS Glue.
When I query the table using SELECT * FROM "vietnam-property-develop"."sell" limit 10;, it throws an error:
HIVE_BAD_DATA: Error parsing field value '{"area":"85 m²","date":"14/01/2020","datetime":"2020-01-18 00:42:28.488576+00:00","address":"Quan Hoa - Cầu Giấy","price":"20 Tỷ","cat":"Bán nhà mặt phố","lon":"105.7976502","avatar":"","id":"24169794","title":"Chính chủ cần bán nhà mặt phố nguyễn văn huyên Quan Hoa Cầu Giấy, 2 tầng, dt 85m2. LH 0903233723","lat":"21.0376771","room":"0"}'
for field 4: org.openx.data.jsonserde.json.JSONObject cannot be cast to java.lang.Double
Then I tried to just query the title column using SELECT title FROM "vietnam-property-develop"."sell" limit 10;
It returns results I didn't expect: the query seems to return the whole JSON document instead of just the title column, and the number of rows is 4 rather than 10 no matter how I modify the query.