BULK INSERT error - SQL

I am trying to BULK INSERT from a .csv file and I get the following error:
Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 2, column 23 (AR).
Msg 4864, Level 16, State 1, Line 4
Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 3, column 23 (AR).
When I open the CSV file in Microsoft Excel, row 2, column 23 is just the number '0'.
So if I go manually into my database table and insert the number 0 in the column AR, it is accepted without any problem. I do not understand why this happens. Any help?

I assume your code looks something like this:
using (SqlBulkCopy bulkCopy = new SqlBulkCopy(destinationConnection))
{
    // Create a reader somehow
    IDataReader reader = new ... // <- Your problem will be here
    bulkCopy.WriteToServer(reader);
}
In your reader you need to read the file according to its type and encoding. Based on your file type, you need to select the correct encoding from System.Text.Encoding.
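Alternatively, if you are loading the file with a T-SQL BULK INSERT statement (which the "Msg 4864 ... Line 4" output suggests), the same idea applies: tell SQL Server the file's code page and delimiters explicitly. A minimal sketch, assuming a hypothetical target table dbo.MyTable, a placeholder file path, and a UTF-8 file (CODEPAGE = '65001' requires SQL Server 2016 or later):

BULK INSERT dbo.MyTable
FROM 'C:\data\myfile.csv'
WITH (
    FIELDTERMINATOR = ',',  -- column delimiter used in the CSV
    ROWTERMINATOR = '\n',   -- line ending; use '0x0d0a' for strict CRLF files
    FIRSTROW = 2,           -- skip the header row if the file has one
    CODEPAGE = '65001'      -- UTF-8; match this to the file's actual encoding
);

A wrong code page is a common cause of "invalid character for the specified codepage" errors on values that look perfectly ordinary in Excel.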

Related

I want to read data out via a linked server in SSMS

Msg 7356, Level 16, State 1, Line 1
The OLE DB provider "MSDASQL" for linked server "" supplied inconsistent metadata for a column. The column "" (compile-time ordinal 11) of object "" was reported to have a "DBCOLUMNFLAGS_ISLONG" of 128 at compile time and 0 at run time.
And I also get this message when I just want to read out a single column, not the one that has the problem.
I tried using SELECT * FROM .... and
SELECT * FROM OPENQUERY([], '') as well.

Trying to import a CSV file to a table in SQL

I have 4 CSV files, each having 500,000 rows. I am trying to import the CSV data into my Exasol database, but there is an error with the date column, and I have a problem with the first, unwanted column in the files.
Here is an example CSV file:
unnamed:0 , time, lat, lon, nobs_cloud_day
0, 2006-03-30, 24.125, -119.375, 22.0
1, 2006-03-30, 24.125, -119.125, 25.0
The table I created to import the CSV into is:
CREATE TABLE cloud_coverage_CONUS (
index_cloud DECIMAL(10,0)
,"time" DATE -- PRIMARY KEY
,lat DECIMAL(10,6)
,lon DECIMAL(10,6)
,nobs_cloud_day DECIMAL (3,1)
)
The command to import is:
IMPORT INTO cloud_coverage_CONUS FROM LOCAL CSV FILE 'D:\uni\BI\project 1\AOL_DB_ANALYSIS_TASK1\datasets\cloud\cfc_us_part0.csv';
But I get this error:
SQL Error [42636]: java.sql.SQLException: ETL-3050: [Column=0 Row=0] [Transformation of value='Unnamed: 0' failed - invalid character value for cast; Value: 'Unnamed: 0'] (Session: 1750854753345597339) while executing '/* add path to the 4 csv files, that are in the cloud database folder*/ IMPORT INTO cloud_coverage_CONUS FROM CSV AT 'https://27.1.0.10:59205' FILE 'e12a96a6-a98f-4c0a-963a-e5dad7319fd5' ;'; 04509 java.sql.SQLException: java.net.SocketException: Connection reset by peer: socket write error
Alternatively I use this table (without the first column):
CREATE TABLE cloud_coverage_CONUS (
"time" DATE -- PRIMARY KEY
,lat DECIMAL(10,6)
,lon DECIMAL(10,6)
,nobs_cloud_day DECIMAL (3,1)
)
And use this import code:
IMPORT INTO cloud_coverage_CONUS FROM LOCAL CSV FILE 'D:\uni\BI\project 1\AOL_DB_ANALYSIS_TASK1\datasets\cloud\cfc_us_part0.csv'(2 FORMAT='YYYY-MM-DD', 3 .. 5);
But I still get this error:
SQL Error [42636]: java.sql.SQLException: ETL-3052: [Column=0 Row=0] [Transformation of value='time' failed - invalid value for YYYY format token; Value: 'time' Format: 'YYYY-MM-DD'] (Session: 1750854753345597339) while executing '/* add path to the 4 csv files, that are in the cloud database folder*/ IMPORT INTO cloud_coverage_CONUS FROM CSV AT 'https://27.1.0.10:60350' FILE '22c64219-cd10-4c35-9e81-018d20146222' (2 FORMAT='YYYY-MM-DD', 3 .. 5);'; 04509 java.sql.SQLException: java.net.SocketException: Connection reset by peer: socket write error
(I actually do want to ignore the first column in the files.)
How can I solve this issue?
Solution:
IMPORT INTO cloud_coverage_CONUS FROM LOCAL CSV FILE 'D:\uni\BI\project 1\AOL_DB_ANALYSIS_TASK1\datasets\cloud\cfc_us_part0.csv' (2 .. 5) ROW SEPARATOR = 'CRLF' COLUMN SEPARATOR = ',' SKIP = 1;
I did not realise that MySQL is different from Exasol.
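For comparison, the same header-skip idea in MySQL would use LOAD DATA rather than IMPORT; a rough sketch, with the file path as a placeholder:

LOAD DATA INFILE '/path/to/cfc_us_part0.csv'
INTO TABLE cloud_coverage_CONUS
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES;

Here IGNORE 1 LINES plays the role of Exasol's SKIP = 1.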
Looking at the first error message, a few things stand out. First we see this:
[Column=0 Row=0]
This tells us the problem is with the very first value in the file. This brings us to the next thing, where the message even tells us what value was read:
Transformation of value='Unnamed: 0' failed
So it's failing to convert Unnamed: 0. You also provided the table definition, where we see the first column in the table is a decimal type.
This makes sense. Unnamed: 0 is not a decimal. For this to work, the CSV data MUST align with the data types for the columns in the table.
But we also see that this looks like a header row. Assuming everything else matches, we can fix it by telling the database to skip this first row. I'm not familiar with Exasol, but according to the documentation I believe the correct code will look like this:
IMPORT INTO cloud_coverage_CONUS
FROM LOCAL CSV FILE 'D:\uni\BI\project 1\AOL_DB_ANALYSIS_TASK1\datasets\cloud\cfc_us_part0.csv'
(2 FORMAT='YYYY-MM-DD', 3 .. 5)
ROW SEPARATOR = 'CRLF'
COLUMN SEPARATOR = ','
SKIP = 1;

REPLACE statement produces "Data truncation" error. Please advise why and how to correct it

UPDATE ASSIGNMENTS SET CBTURL = REPLACE(CBTURL, 'http://172.21.130.19/', 'https://testlpsweb.corp.mbll.ca/Content/')
The above statement produces a "Data truncation" error. Please advise why, and how to correct it.
Error starting at line : 1 in command - UPDATE ASSIGNMENTS SET CBTURL = REPLACE(CBTURL, 'http://172.21.130.19/', 'https://testlpsweb.corp.mbll.ca/Content/') Error at Command Line : 1 Column : 1 Error report - SQL Error: Data truncation
I'm guessing that the CBTURL column length is too small for the resulting string of the REPLACE. Could you try altering the column to have a larger length?
Try this query to see the maximum result string length:
SELECT MAX(LEN(REPLACE(CBTURL, 'http://172.21.130.19/', 'https://testlpsweb.corp.mbll.ca/Content/'))) FROM tablename ....
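If the length is indeed the problem, widening the column should let the UPDATE go through. A hedged sketch, since the exact ALTER syntax depends on your DBMS, and the new length of 500 is only an assumption:

-- SQL Server syntax (assuming CBTURL is currently a varchar)
ALTER TABLE ASSIGNMENTS ALTER COLUMN CBTURL VARCHAR(500);

-- MySQL equivalent
ALTER TABLE ASSIGNMENTS MODIFY CBTURL VARCHAR(500);

Pick a length at least as large as the maximum the query above reports.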

Import date dd/mm from txt to table with SQL

I'm a total beginner and already searched all over the place, so please bear with me.
I have a txt file with this kind of data (DD/MM) and ; as delimiters:
01/10;10/06;15/11;10/07
01/10;10/06;15/11;10/07
01/11;20/06;10/11;30/07
01/11;20/06;10/11;30/07
10/11;20/06;20/01;30/07
01/10;01/06;15/11;30/06
Firstly, I set datestyle to European, so I have DateStyle = "ISO, DMY".
Afterwards, I tried to import this data into some of the columns of the pheno table (see code below), using PostgreSQL:
COPY pheno(planting_onset, harvesting_onset, planting_end, harvesting_end)
FROM '/home/user/Documents/worldcrops/algeria_times.txt' DELIMITERS ';';
And it gave the following error:
ERROR: invalid input syntax for type date: "01/10"
CONTEXT: COPY pheno, line 1, column planting_onset: "01/10"
********** Error **********
ERROR: invalid input syntax for type date: "01/10"
SQL state: 22007
Context: COPY pheno, line 1, column planting_onset: "01/10"
Questions: How do I copy this DD/MM data into a table whose columns have date as their data type? Should I change the columns' data type?
Thanks in advance.
It's expecting DMY but you're only giving it days and months. This is kind of hacky, but I think it should work:
ALTER TABLE pheno
ADD planting_onset_temp VARCHAR(16),
ADD harvesting_onset_temp VARCHAR(16),
ADD planting_end_temp VARCHAR(16),
ADD harvesting_end_temp VARCHAR(16);
COPY pheno(planting_onset_temp, harvesting_onset_temp, planting_end_temp, harvesting_end_temp) FROM '/home/user/Documents/worldcrops/algeria_times.txt' DELIMITERS ';';
UPDATE pheno
SET planting_onset = CONCAT(planting_onset_temp, '/2016'),
harvesting_onset = CONCAT(harvesting_onset_temp, '/2016'),
planting_end = CONCAT(planting_end_temp, '/2016'),
harvesting_end = CONCAT(harvesting_end_temp, '/2016');
ALTER TABLE pheno
DROP COLUMN planting_onset_temp,
DROP COLUMN harvesting_onset_temp,
DROP COLUMN planting_end_temp,
DROP COLUMN harvesting_end_temp;
Replace '/2016' with whatever year is relevant.
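To sanity-check that the server will parse the concatenated values the way you expect, you can cast one by hand first; a small sketch, assuming DateStyle is still "ISO, DMY":

SHOW datestyle;             -- should report ISO, DMY
SELECT '01/10/2016'::date;  -- parsed as 1 October 2016 under DMY

If that comes back as January 10 instead, the DMY setting did not stick for your session, and the UPDATE above would fill the date columns with the wrong dates.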

Derby DB Insert Error SQL state 21000

I am attempting to insert data into a table in my database. I am using an Apache Derby DB. I have the following code:
Insert into P2K_DBA.ODS_CNTRL
(ODS_LOAD_ID, ODS_STATUS, USR_WWID, USR_FIRST_NM,
USR_LAST_NM, USR_DISPLAY_NM, USR_NT_ID,TOT_AMT,
TOT_RCD_CNT, TOT_QTY, LAST_UPD_DT, ODS_ADJ_TYP,
ODS_ADJ_DESC, APRV_WWID, APRV_FIRST_NM,APRV_LAST_NM,
APRV_DISPLAY_NM, APRV_NT_ID, APRV_DT
)
values
(6,'avail','64300339', 'Travis',
'Taylor', 'TT', '3339', 33,
15, 40, '7/10/2012', 'test',
'test', '64300337', 'Travis',
'Taylor', 'TT', '3339', '2/06/2013');
I ran this SQL command and received the following error:
"Error code -1, SQL state 21000: Scalar subquery is only allowed to return a single row.
Line 1, column 1"
I ran this code successfully a few days ago. On top of that, I tried to manually enter data into this table (using NetBeans) and have it auto-generate the code, which resulted in the same error.
What is causing this error and how can I solve/bypass it?
One way in which you could run into this would be to do something like:
CREATE FUNCTION F(...) ...
F((SELECT COL FROM T))
But you could instead write
... (SELECT F(COL) FROM T)
provided the new context permits a subquery, that is.
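For illustration, here is a minimal, self-contained sketch of how SQL state 21000 arises in Derby, and one way around it; the table T and column COL are hypothetical, not part of the asker's schema:

-- Hypothetical setup: a table with more than one row
CREATE TABLE T (COL INT);
INSERT INTO T VALUES (1), (2);

-- Fails with state 21000: the scalar subquery returns two rows
SELECT (SELECT COL FROM T) FROM SYSIBM.SYSDUMMY1;

-- Works: the aggregate guarantees a single row
SELECT (SELECT MAX(COL) FROM T) FROM SYSIBM.SYSDUMMY1;

MAX is just one way to force a single row; a WHERE clause that uniquely identifies one row works as well.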