Loading CSV data containing string and numeric fields into Ignite is failing

I am evaluating Ignite and trying to load CSV data to Apache Ignite. I have created a table in Ignite:
jdbc:ignite:thin://127.0.0.1/> create table if not exists SAMPLE_DATA_PK(SID varchar(30),id_status varchar(50), active varchar, count_opening int,count_updated int,ID_caller varchar(50),opened_time varchar(50),created_at varchar(50),type_contact varchar, location varchar,support_incharge varchar,pk varchar(10) primary key);
I tried to load data to this table with command:
copy from '/home/kkn/data/sample_data_pk.csv' into SAMPLE_DATA_PK(SID,ID_status,active,count_opening,count_updated,ID_caller,opened_time,created_at,type_contact,location,support_incharge,pk) format csv;
But the data load is failing with this error:
Error: Server error: class org.apache.ignite.internal.processors.query.IgniteSQLException: Value conversion failed [column=COUNT_OPENING, from=java.lang.String, to=java.lang.Integer] (state=50000,code=1)
java.sql.SQLException: Server error: class org.apache.ignite.internal.processors.query.IgniteSQLException: Value conversion failed [column=COUNT_OPENING, from=java.lang.String, to=java.lang.Integer]
at org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:1009)
at org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.sendFile(JdbcThinStatement.java:336)
at org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute0(JdbcThinStatement.java:243)
at org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute(JdbcThinStatement.java:560)
at sqlline.Commands.executeSingleQuery(Commands.java:1054)
at sqlline.Commands.execute(Commands.java:1003)
at sqlline.Commands.sql(Commands.java:967)
at sqlline.SqlLine.dispatch(SqlLine.java:734)
at sqlline.SqlLine.begin(SqlLine.java:541)
at sqlline.SqlLine.start(SqlLine.java:267)
at sqlline.SqlLine.main(SqlLine.java:206)
Below is the sample data I am trying to load:
SID|ID_status|active|count_opening|count_updated|ID_caller|opened_time|created_at|type_contact|location|support_incharge|pk
INC0000045|New|true|1000|0|Caller2403|29-02-2016 01:16|29-02-2016 01:23|Phone|Location143||1
INC0000045|Resolved|true|0|3|Caller2403|29-02-2016 01:16|29-02-2016 01:23|Phone|Location143||2
INC0000045|Closed|false|0|1|Caller2403|29-02-2016 01:16|29-02-2016 01:23|Phone|Location143||3
INC0000047|Active|true|0|1|Caller2403|29-02-2016 04:40|29-02-2016 04:57|Phone|Location165||4
INC0000047|Active|true|0|2|Caller2403|29-02-2016 04:40|29-02-2016 04:57|Phone|Location165||5
INC0000047|Active|true|0|489|Caller2403|29-02-2016 04:40|29-02-2016 04:57|Phone|Location165||6
INC0000047|Active|true|0|5|Caller2403|29-02-2016 04:40|29-02-2016 04:57|Phone|Location165||7
INC0000047|AwaitingUserInfo|true|0|6|Caller2403|29-02-2016 04:40|29-02-2016 04:57|Phone|Location165||8
INC0000047|Closed|false|0|8|Caller2403|29-02-2016 04:40|29-02-2016 04:57|Phone|Location165||9
INC0000057|New|true|0|0|Caller4416|29-02-2016 06:10||Phone|Location204||10
I need help understanding how to figure out what the issue is and how to resolve it.

You have to upload the CSV without the header line, which contains the column names. The error is thrown because Ignite tries to convert the header value "count_opening" (a string) to an Integer for the COUNT_OPENING column.
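A minimal sketch of the fix, assuming the file path from the question (the _noheader file name is just an illustration): strip the header line with a shell command, then re-run the same COPY against the trimmed file.

tail -n +2 /home/kkn/data/sample_data_pk.csv > /home/kkn/data/sample_data_pk_noheader.csv

copy from '/home/kkn/data/sample_data_pk_noheader.csv' into SAMPLE_DATA_PK(SID,ID_status,active,count_opening,count_updated,ID_caller,opened_time,created_at,type_contact,location,support_incharge,pk) format csv;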

Related

Synapse polybase data ingestion is not working

I have a task to convert jobs from the Synapse bulk insert pattern to the Synapse PolyBase pattern. As part of that I see that it doesn't work straight away; it complains about data types as shown below, even though there are no double data types in the source query. Please help me understand whether there is a basic pattern or casting we need to apply before we use PolyBase.
Here is the source SQL I used:
SELECT TOP (1000)
       CAST([SiteCode_SourceId] AS varchar(1000))                AS [SiteCode_SourceId]
      ,CAST([EquipmentCode_SourceId] AS varchar(1000))           AS [EquipmentCode_SourceId]
      ,FORMAT([RecordedAt], 'yyyy-MM-dd HH:mm:ss.fffffff')       AS [RecordedAt]
      ,CAST([DataLineage_SK] AS varchar(1000))                   AS [DataLineage_SK]
      ,CAST([DataQuality_SK] AS varchar(1000))                   AS [DataQuality_SK]
      ,CAST([FixedPlantAsset_SK] AS varchar(1000))               AS [FixedPlantAsset_SK]
      ,CAST([ProductionTimeOfDay_SK] AS varchar(1000))           AS [ProductionTimeOfDay_SK]
      ,CAST([ProductionType_SK] AS varchar(1000))                AS [ProductionType_SK]
      ,CAST([Shift_SK] AS varchar(1000))                         AS [Shift_SK]
      ,CAST([Site_SK] AS varchar(1000))                          AS [Site_SK]
      ,CAST([tBelt] AS varchar(1000))                            AS [tBelt]
      ,FORMAT([ModifiedAt], 'yyyy-MM-dd HH:mm:ss.fffffff')       AS [ModifiedAt]
      ,FORMAT([SourceUpdatedAt], 'yyyy-MM-dd HH:mm:ss.fffffff')  AS [SourceUpdatedAt]
FROM [ORXX].[public_XX].[fact_FixedXXXX]
Operation on target cp_data_movement failed: ErrorCode=PolybaseOperationFailed,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Error happened when loading data into SQL Data Warehouse. Operation: 'Polybase operation'.,Source=Microsoft.DataTransfer.ClientLibrary,''Type=System.Data.SqlClient.SqlException,Message=HdfsBridge::recordReaderFillBuffer - Unexpected error encountered filling record reader buffer: ClassCastException: class parquet.io.api.Binary$ByteArraySliceBackedBinary cannot be cast to class java.lang.Double (parquet.io.api.Binary$ByteArraySliceBackedBinary is in unnamed module of loader 'app'; java.lang.Double is in module java.base of loader 'bootstrap'),Source=.Net SqlClient Data Provider,SqlErrorNumber=106000,Class=16,ErrorCode=-2146232060,State=1,Errors=[{Class=16,Number=106000,State=1,Message=HdfsBridge::recordReaderFillBuffer - Unexpected error encountered filling record reader buffer: ClassCastException: class parquet.io.api.Binary$ByteArraySliceBackedBinary cannot be cast to class java.lang.Double (parquet.io.api.Binary$ByteArraySliceBackedBinary is in unnamed module of loader 'app'; java.lang.Double is in module java.base of loader 'bootstrap'),},],'
Reasons for this error can be:
The order of the columns in the target table does not match the source table, so there is a data type mismatch.
The data types in the parquet file are incompatible with the target table's data types.
Solution:
Make sure the order of the columns is the same as in the parquet staging file.
Keep the same data types in the source columns and the target columns.
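As a hedged illustration (the target table below is hypothetical and simply mirrors the source query above; the real target definition is not shown in the question), the columns should appear in the same order and with string types matching what actually lands in the Parquet staging file, since every column is CAST or FORMATted to text:

-- Hypothetical Synapse target table: column order and types mirror the staged Parquet columns
CREATE TABLE [dbo].[fact_FixedXXXX_target]
(
    [SiteCode_SourceId]      varchar(1000),
    [EquipmentCode_SourceId] varchar(1000),
    [RecordedAt]             varchar(1000),  -- FORMAT() emits a string, not datetime2
    [DataLineage_SK]         varchar(1000),
    [DataQuality_SK]         varchar(1000),
    [FixedPlantAsset_SK]     varchar(1000),
    [ProductionTimeOfDay_SK] varchar(1000),
    [ProductionType_SK]      varchar(1000),
    [Shift_SK]               varchar(1000),
    [Site_SK]                varchar(1000),
    [tBelt]                  varchar(1000),
    [ModifiedAt]             varchar(1000),
    [SourceUpdatedAt]        varchar(1000)
)
WITH (DISTRIBUTION = ROUND_ROBIN, HEAP);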

I am unable to copy sample data to populate a table in Coginity Pro from Redshift

I have been trying to copy data to a table in Coginity Pro, but I get the error message below.
I have copied my ARN from Redshift and pasted it into the relevant path, but I still could not populate the sample data into the tables already created in Coginity Pro.
Below is the error message:
Status: ERROR
copy users from 's3://awssampledbuswest2/tickit/allusers_pipe.txt'
credentials 'aws_iam_role='
delimiter '|' region 'us-west-2'
36ms 2022-11-28T02:23:51.059Z
(SQLSTATE: 08006, SQLCODE: 0): An I/O error occurred while sending to the backend.
#udemeribe: please check the STL_LOAD_ERRORS table (ordered by starttime).
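A minimal sketch of that check, using standard STL_LOAD_ERRORS columns to show the most recent load failures and the reason for each:

select starttime,
       filename,
       line_number,
       colname,
       err_code,
       err_reason
from stl_load_errors
order by starttime desc
limit 20;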

What is the alternative for the double datatype from Spark SQL (Databricks) in SQL Server Data Warehouse?

I have to load data from Azure Data Lake into the data warehouse. I have created the setup for creating external tables. There is one column which is a double datatype; I used the decimal type in SQL Server Data Warehouse when creating the external table, and the file format is Parquet. Using CSV it works, but with Parquet I'm getting the following error:
HdfsBridge::recordReaderFillBuffer - Unexpected error encountered
filling record reader buffer: ClassCastException: class
java.lang.Double cannot be cast to class parquet.io.api.Binary
(java.lang.Double is in module java.base of loader 'bootstrap';
parquet.io.api.Binary is in unnamed module of loader 'app'.
Can someone help me with this issue?
Thanks in advance.
CREATE EXTERNAL TABLE [dbo].[EXT_TEST1]
( A VARCHAR(10), B decimal(36,19) )
WITH (DATA_SOURCE = [Azure_Datalake], LOCATION = N'/A/B/PARQUET/*.parquet/', FILE_FORMAT = parquetfileformat, REJECT_TYPE = VALUE, REJECT_VALUE = 1)
Column data types in Databricks: A string, B double
Data:
A   | B
'a' | 100.0050
Use float(53), which is double precision (a 53-bit mantissa, roughly 15 decimal digits) and 8 bytes long, so it corresponds to the Parquet double type.
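A minimal sketch of the external table with that change, reusing the names from the question and leaving everything else as posted:

CREATE EXTERNAL TABLE [dbo].[EXT_TEST1]
(
    A VARCHAR(10),
    B FLOAT(53)   -- double precision, matches the Parquet double column written by Databricks
)
WITH (
    DATA_SOURCE = [Azure_Datalake],
    LOCATION = N'/A/B/PARQUET/*.parquet/',
    FILE_FORMAT = parquetfileformat,
    REJECT_TYPE = VALUE,
    REJECT_VALUE = 1
);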

SSIS XML Source Error - Input string was not in a correct format

I have an attribute tlost with the definition below in the XSD file. I have tried both use="required" and use="optional".
<xs:attributeGroup name="defense">
<xs:attribute name="tlost" use="required" type="xs:decimal"/>
</xs:attributeGroup>
In the XML document I am trying to import I will get a value like the following:
<defense ast="0" category="special_team" tlost="0" int="0"/>
I am executing an SSIS package that takes the tlost value and inserts it into a sql database table. The column in the database table has a datatype of DECIMAL(28,10) and allows nulls.
When I execute the package, the previous values work perfectly and the data is inserted. However, when I get a value where tlost="" in the XML file, the package fails and the record is not inserted.
In the data flow path editor, the data type for tlost is DT_DECIMAL. When I check the Advanced Editor for the XML Source, the Input and Output properties have a data type for tlost as decimal [DT_DECIMAL].
I can't figure out why this is failing. I tried to create a derived column and cast it as a (DT_DECIMAL, 10) data type; that didn't work. I tried to check for a null value and replace it with 0 if null; that didn't work. So I ignored the column altogether and, in the Derived Column task, replaced the tlost column value with (DT_DECIMAL, 10) 0 to just insert a 0 and ignore whatever is in the XML file, and the job still failed with the following error message:
Error: 0xC020F444 at Load Play Summary Tables, XML Source [1031]: The error "Input string was not in a correct format." occurred while processing "XML Source.Outputs[defense].Columns[tlost]".
Error: 0xC02090FB at Load Play Summary Tables, XML Source [1031]: The "XML Source" failed because error code 0x80131537 occurred, and the error row disposition on "XML Source.Outputs[defense].Columns[tlost]" at "XML Source.Outputs[defense]" specifies failure on error. An error occurred on the specified object of the specified component.
Error: 0xC02092AF at Load Play Summary Tables, XML Source [1031]: The XML Source was unable to process the XML data. Pipeline component has returned HRESULT error code 0xC02090FB from a method call.
Error: 0xC0047038 at Load Play Summary Tables, SSIS.Pipeline: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on XML Source returned error code 0xC02092AF. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure.
Please help. I have exhausted everything I can think of to fix this issue. I am processing hundreds of files, and I can't keep fixing bad data files every time this issue occurs.
Can you please try these:
1 - Change the data type to string in the XSD and handle the data type conversion before loading into the tables.
2 - If possible, generate the XSD by passing in your XML, then verify the data type it infers and use it accordingly; the rest of the XSD can be adjusted in the same way.
Below is a screen grab of what I tried; hope it helps.
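For option 1, a hedged sketch of the attribute definition, assuming the rest of the attributeGroup from the question stays the same (the conversion to DECIMAL(28,10) would then happen in a Derived Column or in the database):

<xs:attributeGroup name="defense">
  <!-- declared as a string so that tlost="" no longer breaks the XML Source;
       convert/cast to decimal later in the pipeline -->
  <xs:attribute name="tlost" use="optional" type="xs:string"/>
</xs:attributeGroup>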

SSIS export to CSV file failing

I am trying to export the contents of a SQL Server 2005 table to a CSV file using SSIS. In the Data Flow Task I have an OLE DB Source for the table and a Flat File Destination for the file.
When copying the data I started getting a failure on one of the columns for a certain row, and following some investigation I found the problem was with commas in the data below.
Data issue (nvarchar(255)):
errors code l075 showing,,,re test.
OLE DB Source for Comment col
Derived Column
Given that this was the issue, I created a Derived Column object between the source and destination objects and tried filtering out the commas using REPLACE(Comment, ",", " "), but the same column is still failing with the errors below.
Destination Component
Exception
[Inspection Failures Destination [206]] Error: Data conversion failed.
The data conversion for column "Comment" returned status value 4 and
status text "Text was truncated or one or more characters had no
match in the target code page.".
[Inspection Failures Destination [206]] Error: Cannot copy
or convert flat file data for column "Comment".
[SSIS.Pipeline] Error: SSIS Error Code DTS_E_PROCESSINPUTFAILED.
The ProcessInput method on component "Inspection Failures
Destination" (206) failed with error code 0xC02020A0 while
processing input "Flat File Destination Input" (207). The
identified component returned an error from the ProcessInput
method. The error is specific to the component, but the error
is fatal and will cause the Data Flow task to stop running.
There may be error messages posted before this with more
information about the failure.
[Inspecton Failures Source [128]] Error: The attempt to
add a row to the Data Flow task buffer failed with error
code 0xC0047020.
[SSIS.Pipeline] Error: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED.
The PrimeOutput method on component "Inspecton Failures Source"
(128) returned error code 0xC02020C4. The component returned
a failure code when the pipeline engine called PrimeOutput().
The meaning of the failure code is defined by the component,
but the error is fatal and the pipeline stopped executing.
There may be error messages posted before this with more
information about the failure.
OK, the problem actually appears to be a hidden illegal character in the text.
In the image below the top line shows a square before the "re test" string. The Comment column in the database is an nvarchar, which apparently uses a different character set, so I cannot just use CHAR(13) + CHAR(10) to replace the carriage return.
The fix involved converting the field from nvarchar to varchar and then performing a replace on the converted '?' character, resulting in the corrected second line in the image:
SELECT ID,
REPLACE(REPLACE(CAST(Comment AS varchar(255)),'?',' '),',',' ') Comment
FROM tblInspectionFailures WHERE (ID = 216899)
The conversion requirement is detailed here
This does not sound like an ideal solution to me, but it does work. Does anyone have any other options?
Without replacing the Comment column, can you create another derived column, map that new derived column to the destination column, and see? A sketch of the idea follows below.
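A hedged sketch of that suggestion, reusing the query from the fix above (CommentClean is a hypothetical new column name; the original Comment column is left untouched and only the cleaned copy would be mapped to the Flat File Destination):

SELECT ID,
       Comment,                                                                          -- original column, left as-is
       REPLACE(REPLACE(CAST(Comment AS varchar(255)), '?', ' '), ',', ' ') AS CommentClean  -- cleaned copy to map to the CSV
FROM tblInspectionFailures;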