How to solve error GDK-05030 when importing CSV data in SQL Developer

I have a problem importing a CSV file using SQL Developer. I created a table to hold the date data using the code below:
CREATE TABLE DEPARTMENT (
DATECOLUMN DATE
);
I imported the CSV by right-clicking the table and using the import wizard.
Interestingly, the date data in the CSV has the format 'YYYY-MM-DD', but when loaded in SQL Developer (in the preview, before importing), it took the format 'MM/DD/YYYY'.
I believe this is the problem, because when trying to import, it returned error GDK-05030 ("date format picture ends before converting entire input string"), i.e. the data contains more characters than the format mask expects.

To me, it looks as follows:
the date format in the CSV file is yyyy-mm-dd (as you said)
when you queried the table after the insert, you found that the values look different - mm/dd/yyyy
that is because your current session settings use that date format. If you want to change it, run
alter session set nls_date_format = 'yyyy-mm-dd';
and then select rows from the table again.
Alternatively, go to SQL Developer's Preferences and set the desired date format in its [Database - NLS] section.
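To see what your session currently uses before changing it, you can query the standard NLS dictionary view (a quick sketch, reusing the table and column from the question):

select value from nls_session_parameters where parameter = 'NLS_DATE_FORMAT';
alter session set nls_date_format = 'yyyy-mm-dd';
select datecolumn from department; -- dates should now display as yyyy-mm-dd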

Related

Importing CSV file but getting timestamp error

I'm trying to import CSV files into BigQuery, and for any of the hourly reports I attempt to upload it gives this error:
Error while reading data, error message: Could not parse 4/12/2016 12:00:00 AM as TIMESTAMP for field SleepDay (position 1) starting at location 65 with message Invalid time zone: AM
I get that the parser is treating AM as a time zone and failing, but I'm not sure how best to work around it. All of the hourly entries have AM or PM after the date-time, and there will be thousands of entries.
I'm using schema auto-detect, and I believe that's where the issue is coming from, but I'm not sure what to put in the "edit as text" schema option to fix it.
To successfully parse an imported string to a timestamp in BigQuery, the string must be in the ISO 8601 format:
YYYY-MM-DDThh:mm:ss.sss
If your source data is not available in this format, then try the below approach.
1. Import the CSV into a temporary table by providing an explicit schema in which the timestamp fields are strings.
2. Select the data from the temporary table, parse it with the BigQuery PARSE_TIMESTAMP function as specified below, and write it to the permanent table.
INSERT INTO `example_project.example_dataset.permanent_table`
SELECT
PARSE_TIMESTAMP('%m/%d/%Y %I:%M:%S %p', time_stamp) AS time_stamp, -- %I (12-hour clock) is required when %p (AM/PM) is present
value
FROM `example_project.example_dataset.temporary_table`;
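As a quick sanity check of the format string, using the value from the error message above (a hedged example, assuming leading zeros are optional when parsing, as they are in BigQuery's parse functions):

SELECT PARSE_TIMESTAMP('%m/%d/%Y %I:%M:%S %p', '4/12/2016 12:00:00 AM') AS ts;
-- ts = 2016-04-12 00:00:00 UTC (12:00:00 AM parses as midnight)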

Import/convert my date column from Excel into a datetime datatype in SQL Server?

I am importing an Excel file in CSV format. The file has 1,048,576 rows.
So far I have not been able to import the file with a datetime datatype, or to convert/cast the date columns to datetime. I can only import the date column as nvarchar, varchar, or sql_variant. The only time I can convert the column is if the file is under 71,000 rows; otherwise I get an error that the conversion from string is out of range or failed. (Please see below for pictures and more detail.)
I have tried the following
Casting or converting the columns
Changing the data type via the table design
Importing with date time datatype
Copying and pasting the data to a datetime column
Checking and converting the date column in Excel before importing
Batch inserting the document
Importing as a text file
Removing null columns in the Excel file
Importing as a non-string variable
Creating a temporary table
Making temporary variables with datetime data types
Notes
I am using the Developer edition of SQL Server
I am trying to get the date columns (transaction_date, date_created) into a datetime datatype; currently I can only have them as an nvarchar datatype
How I Import The CSV File
Right-clicking the database, clicking Tasks, and choosing Import Flat File
Error Messages
Error message when trying to convert via table design
Error message when importing csv file to sql server
Sample of The CSV File
Sample of the CSV File Showing About 10 Rows
Database Schema
Brief Synopsis of Desired Database, Trying to Get Date Columns to DateTime Data type
Function SQLDate(d)
    ' Format an Excel date as an ISO-style string that SQL Server can convert
    SQLDate = WorksheetFunction.Text(d, "yyyy-mm-dd hh:mm:ss")
End Function
Note: "d" refers to date field.
Alternatively, you could convert the Excel data to a string:
TEXT(d, "dd mmmm yyyy hh:mm:ss")
Then convert as required from within SQL. For further clarity, see the screenshot below.
Citation: Thanks to H Javed
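A minimal T-SQL sketch of the "convert from within SQL" step, assuming the dates land in an nvarchar staging table first (the table name dbo.staging_transactions is an illustrative assumption; the column names come from the question):

-- Staging table keeps the dates as text so the import itself cannot fail
CREATE TABLE dbo.staging_transactions (
    transaction_date nvarchar(30),
    date_created nvarchar(30)
);

-- After the flat-file import, convert while reading out; style 120 matches
-- the 'yyyy-mm-dd hh:mm:ss' strings produced by SQLDate above, and
-- TRY_CONVERT returns NULL instead of raising an error on bad rows
SELECT
    TRY_CONVERT(datetime, transaction_date, 120) AS transaction_date,
    TRY_CONVERT(datetime, date_created, 120) AS date_created
FROM dbo.staging_transactions;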

date converting incorrectly with ssms import data wizard (Azure db)

I have update and delete permissions (not ALTER) for an Azure DB table and am trying to import a CSV file using the SSMS import data wizard. The import works OK except that it changes the date in a field to today's date. For example, the field in the CSV looks like this:
"Jun 01, 2018 01:37AM"
After it runs through the wizard it looks like this in my table:
2018-06-13 01:37:00.000
The datatype that I chose for the date in my CSV file during the import was:
database time [DT_DBTIME]
The datatype specified for that field in my table is:
datetime
Any idea what I'm doing wrong? Am I choosing the wrong datatype for the field in the csv?
It is because of the file data type (DT_DBTIME). This is not the correct data type; you need to choose DT_DBTIMESTAMP to map correctly to datetime. DT_DBTIME holds only a time of day, so I presume the wizard is using the SQL Server's current date and, in effect, importing only the time element (01:37AM) from your CSV.
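If the wizard still struggles with the "Jun 01, 2018 01:37AM" format, another option (a sketch, not part of the answer above) is to import the column as plain text (DT_STR) into a staging column and convert it in T-SQL afterwards; the names raw_date and dbo.staging_import are illustrative assumptions:

SELECT TRY_CONVERT(datetime, REPLACE(raw_date, ',', ''), 100) AS parsed_date
FROM dbo.staging_import;
-- Style 100 matches 'mon dd yyyy hh:miAM' (e.g. 'Jun 01 2018 01:37AM' once
-- the comma is removed); TRY_CONVERT yields NULL rather than failing on bad rows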

Importing date data from excel into table with right format conversion

I am importing data from Excel files into a SQL Server database. In Excel, the date fields are in the format mm-dd-yyyy, whereas SQL imports them as yyyy-mm-dd, swapping days and months. For example, in Excel it is 01-12-2018, but in SQL it imports as 2018-12-01. Hence the dates are incorrect.
Please let me know a way to import it correctly in SQL.
I am happy to import it either way, yyyy-mm-dd or yyyy-dd-mm. I just want it to read the date correctly from the Excel file.
I am using import export wizard.
Updated answer:
The OP mentioned in a comment on another answer:
I am currently using import export wizard to import data.
To solve the problem in the Import/Export wizard:
Choose Flat file source - I used a sample file (sample.csv) with this data
id, dates, text
1,08-09-2018,"sample"
2,05-09-2019,"more sample"
Under "Choose your data source" I went to the Advanced tab and ensured that the dates column is imported as a string.
At the Select Source Tables and Views step, go into Edit Mappings > Edit SQL and change the table-creation query to add an extra computed column (say cdates) with a definition like [cdates] as convert(date, [ dates], 110). (Note the leading space in [ dates]: the column name comes from the CSV header "id, dates, text", which has a space after each comma.) As you can see, I added this in my SQL as the last column in the definition.
If you are not creating the table but inserting into an existing one, alter the table to add a computed column over the varchar date column.
See the output I got
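For reference, the generated table-creation query ends up looking roughly like this (a sketch; the wizard generates the actual table name, and the leading spaces in the bracketed column names are inherited from the CSV header):

CREATE TABLE [dbo].[sample] (
    [id] int,
    [ dates] varchar(50),
    [ text] varchar(50),
    [cdates] AS CONVERT(date, [ dates], 110) -- style 110 parses the mm-dd-yyyy strings
);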
Original Answer:
You should import the dates as nvarchar(10) into the table and then cast them after they have been imported.
Typically an all-nvarchar structure is used for this purpose; such tables are called staging tables. In staging tables, most of the untrusted data fields are of nvarchar(N) type, which allows them to be imported into SQL Server without conversion errors.
After import, you should move the data from the staging tables into the desired tables, with properly cast/converted columns, in a MERGE or upsert query.
In your case you should use an explicit convert like
CONVERT(date, your_staging_date_column, 110)
If the data is typed as Date in Excel, you can convert it to nvarchar and then back to date as follows:
CONVERT(date, CONVERT(nvarchar, [date from excel]), 110);
The 110 at the end is the style code: it means the string is read in the mm-dd-yyyy format, which is how your data is currently formatted in Excel.
If you are reading the data from Excel as pure text, then it is enough to convert that text to date:
CONVERT(date, [date from excel], 110);
This will convert the data to the correct format in your table.
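A quick illustration of what style 110 does, using the value from the question:

SELECT CONVERT(date, '01-12-2018', 110) AS converted;
-- returns 2018-01-12: the string is read as mm-dd-yyyy (January 12), not December 1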

In PostgreSQL, what's data type you pass to a create table call when dealing with timestamp values?

When creating a table, how do you deal with a timestamp in a CSV file that has the following syntax: MM/DD/YY HH:MI? Here's an example: 1/1/16 19:00
I have tried the following script in PostgreSQL:
create table timetable (
time timestamp
);
copy timetable from '<path>' delimiter ',' CSV;
But, I receive an error message saying:
ERROR: invalid input syntax for type timestamp: "visit_datetime"
Where: COPY air_reserve, line 16, column visit_datetime: "visit_datetime"
One solution I have considered is first creating the timestamp column as char, then running a separate query that converts it to the appropriate timestamp datatype with to_timestamp(time, 'MM/DD/YY HH24:MI') (to_timestamp rather than to_char, since this converts text to a timestamp, and HH24 handles 24-hour values like 19:00). But I'm looking for a solution that would load the data with the correct datatype in a single step.
You may find a datestyle that enables you to load the data you have, but sooner or later someone will deliver to you something that doesn't fit.
The solution you have considered is probably the best.
We use this as a standard pattern for loading data warehouses. We take today's data and load it into a staging table, using varchar columns for any data that will not load directly into its target data type. We then run whatever scripts we need to get the data into a good state, raising warnings for anything that is broken in a way we haven't seen before. Then we add the cleaned version of today's data to the table containing cleaned data for all previous days.
We don't mind if this takes several steps; we put them all in a script and run it as an automated job.
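A minimal PostgreSQL sketch of that staging pattern for the table in the question (timetable_staging and time_raw are illustrative names):

-- Stage the raw text, skipping the CSV header row that caused the error above
CREATE TABLE timetable_staging (time_raw varchar);
COPY timetable_staging FROM '<path>' DELIMITER ',' CSV HEADER;

-- Parse into the real timestamp column; HH24 handles 24-hour values like 19:00
INSERT INTO timetable (time)
SELECT to_timestamp(time_raw, 'MM/DD/YY HH24:MI')
FROM timetable_staging;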
I'm working on documenting the techniques we use. You can see the beginnings of this at http://www.thedatastudio.net.