Need help with BigQuery Data Transfer Service (DTS).
After creating a table "allorders" with an auto-detected schema, I created a data transfer service. But when I run the DTS I get an error; see the job below. The quantity field type is definitely set to INTEGER, and all the data in that field are whole numbers.
Job bqts_602c3b1a-0000-24db-ba34-30fd38139ad0 (table allorders) failed
with error INVALID_ARGUMENT: Error while reading data, error message:
Could not parse 'quantity' as INT64 for field quantity (position 14)
starting at location 0 with message 'Unable to parse'; JobID:
956421367065:bqts_602c3b1a-0000-24db-ba34-30fd38139ad0
When I recreated the table with all fields set to type STRING, it worked fine. See the job below.
Job bqts_607cef13-0000-2791-8888-001a114b79a8 (table allorders)
completed successfully. Number of records: 56017, with errors: 0.
To find the unparseable values, query the table with all-string fields:
SELECT *
FROM dataset.allorders
WHERE SAFE_CAST(quantity AS INT64) IS NULL;
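Values that fail the cast are often not visibly non-numeric. A hedged follow-up sketch (same assumed dataset and table names) that also surfaces hidden whitespace or other stray bytes around the digits:
SELECT quantity, TO_HEX(CAST(quantity AS BYTES)) AS raw_bytes
FROM dataset.allorders
WHERE SAFE_CAST(quantity AS INT64) IS NULL
   OR quantity != TRIM(quantity);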
I have the following table schema, prepared by AWS Glue.
When I query the table using SELECT * FROM "vietnam-property-develop"."sell" limit 10;, it throws an error:
HIVE_BAD_DATA: Error parsing field value '{"area":"85
m²","date":"14/01/2020","datetime":"2020-01-18
00:42:28.488576+00:00","address":"Quan Hoa - Cầu Giấy","price":"20
Tỷ","cat":"Bán nhà mặt
phố","lon":"105.7976502","avatar":"","id":"24169794","title":"Chính
chủ cần bán nhà mặt phố nguyễn văn huyên Quan Hoa Cầu Giấy, 2 tầng, dt
85m2. LH 0903233723","lat":"21.0376771","room":"0"}' for field 4:
org.openx.data.jsonserde.json.JSONObject cannot be cast to
java.lang.Double
Then I tried to query just the title column using SELECT title FROM "vietnam-property-develop"."sell" limit 10;
It returns a result I didn't expect: the query seems to return the whole JSON files instead of just the title column, and the number of rows is 4, not 10, no matter how I modify the query.
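For context, the OpenX JSON SerDe named in the error (org.openx.data.jsonserde.JsonSerDe) expects exactly one JSON object per line, and a common cause of symptoms like these (a row count matching the number of files, whole objects landing in one column) is pretty-printed, multi-line JSON input. A hedged sketch of a matching all-string DDL, with the column list inferred from the error payload and the S3 location assumed:
CREATE EXTERNAL TABLE `vietnam-property-develop`.`sell` (
  area string,
  `date` string,
  `datetime` string,
  address string,
  price string,
  cat string,
  lon string,
  avatar string,
  id string,
  title string,
  lat string,
  room string
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION 's3://your-bucket/vietnam-property-develop/sell/';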
I'm querying a DB2 table (STG_TOOL) with 2 columns - T_L_ID - Integer, Name - VARCHAR(20).
SELECT T_L_ID, Name FROM STG_TOOL;
That query returns results. However, the query below gives an error.
SELECT T_L_ID, RTRIM(Name) FROM STG_TOOL;
This query fails at the 78th row.
DB2 Database Error: ERROR [42815] [IBM][DB2] SQL0171N The data type,
length or value of the argument for the parameter in position "1" of
routine "SYSIBM.RTRIM" is incorrect. Parameter name: "". 1 0
The reason identified is that Name in the 78th row contains the Unicode replacement character ('�', U+FFFD).
Even the same query with a WHERE clause gives the error.
SELECT T_L_ID, RTRIM(Name) FROM STG_TOOL WHERE T_L_ID = 78;
The sample data in the 78th row is T_L_ID = 1040 and Name = 'test�'.
The same error occurs for that query as well.
What does the error imply? How can it be handled or solved?
Adding details to the post:
Version: DSN11010 (version 11)
OS: z/OS
Encoding: Unicode
Toad for DB2 (version 5.5) is being used for querying.
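SQL0171N on a specific row often indicates that the value handed to RTRIM is not a valid character string in the column's encoding, so inspecting the raw bytes is a reasonable first step. A hedged diagnostic sketch using the standard HEX built-in (which returns the stored bytes without attempting character conversion), keyed on the sample row above:
SELECT T_L_ID, HEX(Name) AS name_hex
FROM STG_TOOL
WHERE T_L_ID = 1040;
Comparing the hex output against valid sequences for the table's CCSID should show whether the '�' is a genuinely stored U+FFFD or a byte sequence that is invalid in that encoding.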
I am trying to import data from a flat file. My table has 3 columns:
id - int - auto_increment - primary key
profileID - int
taken - bit
This is what my flat file looks like:
profileID
79518
27835
26853
15052
11165
67092
57399
There are 10,000 rows, and I keep getting this error when I try to import:
Data Flow Task 1: Data conversion failed. The data conversion for column "profileID" returned status value 6 and status text "Conversion failed because the data value overflowed the specified type.".
In the flat file connection, I set the data type for profileID to single-byte signed integer [DT_I1].
Please Help!
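For reference, DT_I1 is a one-byte signed integer, so its range is -128 to 127; a value such as 79518 cannot fit, which is exactly the overflow the error reports. An INT column corresponds to the four-byte signed integer [DT_I4]. A hedged T-SQL sketch of the destination table as described above (table name assumed, written in T-SQL since the import is SSIS):
-- Hypothetical destination table matching the post's description.
-- profileID holds values like 79518, so it needs a four-byte INT
-- (DT_I4 in the flat file source), not a single byte (DT_I1).
CREATE TABLE dbo.Profiles (
    id        INT IDENTITY(1,1) PRIMARY KEY, -- int, auto-increment, primary key
    profileID INT NOT NULL,
    taken     BIT NULL
);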
My input records:
4000001,Kristina,Chung,55,Pilot
4000002,Paige,Chen,74,Teacher
4000003,Sherri,Melton,34,Firefighter
4000004,Gretchen,Hill,66,Computer hardware engineer
4000005,Karen,Puckett,74,Lawyer
4000006,Patrick,Song,42,Veterinarian
4000007,Elsie,Hamilton,43,Pilot
4000008,Hazel,Bender,63,Carpenter
4000009,Malcolm,Wagner,39,Artist
4000010,Dolores,McLaughlin,60,Writer
4000011,Francis,McNamara,47,Therapist
4000012,Sandy,Raynor,26,Writer
4000013,Marion,Moon,41,Carpenter
4000014,Beth,Woodard,65,
4000015,Julia,Desai,49,Musician
4000016,Jerome,Wallace,52,Pharmacist
4000017,Neal,Lawrence,72,Computer support specialist
4000018,Jean,Griffin,45,Childcare worker
4000019,Kristine,Dougherty,63,Financial analyst
4000020,Crystal,Powers,67,Engineering technician
4000021,Alex,May,39,Environmental scientist
4000022,Eric,Steele,66,Doctor
4000023,Wesley,Teague,42,Carpenter
4000024,Franklin,Vick,28,Dancer
4000025,Claire,Gallagher,42,Musician
Pig commands:
cust = load '/input/custs' using PigStorage(',')
    as (custid:chararray, firstname:chararray, lastname:chararray,
        age:long, profession:chararray);
worked correctly
groupbyprofession = group cust by profession;
worked correctly
countbyprofession = foreach groupbyprofession generate group, COUNT(cust);
dump countbyprofession;
shows the error:
ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1000: Error during parsing. Scalars can be only used with projection.
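That message generally indicates the parser treated a relation alias as a scalar where a projection was expected. As a point of comparison, a hedged sketch of the same pipeline that counts an explicitly projected bag instead of passing the relation alias itself (inside the FOREACH, cust.custid is a bag of custids for each group, which COUNT accepts):
cust = load '/input/custs' using PigStorage(',')
    as (custid:chararray, firstname:chararray, lastname:chararray,
        age:long, profession:chararray);
groupbyprofession = group cust by profession;
countbyprofession = foreach groupbyprofession
    generate group as profession, COUNT(cust.custid) as cnt;
dump countbyprofession;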