Datatype compatibility: int vs. nvarchar - sql

I am using the SQL statement below to read and import data from an Excel spreadsheet into a database table:
INSERT INTO [dbo].[TestTable] ([ColumnA], [ColumnB], [ColumnC], [ColumnD])
SELECT A.[AAA], A.[BBB], A.[CCC], A.[DDD] FROM OPENROWSET
('Microsoft.ACE.OLEDB.12.0', 'Excel 12.0;Database=C:\Temp\TestFile\01-TestFile.xlsx;HDR=YES', 'select * from [Sheet$]') AS A;
TestTable has four columns, all accepting only integer datatypes. The import works well when the spreadsheet contains numbers. However, if for some reason there is a mistake in the file (i.e. there is text instead of digits), it fails and I get an error converting nvarchar to int. Is there any way to ensure the upload still works, with text omitted and appearing as NULL in the table?

Try TRY_CAST as shown below; it returns NULL when the conversion fails, and works on SQL Server 2012 and later.
SELECT TRY_CAST('TEST' AS INT)
See more on TRY_CAST (Transact-SQL)
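Applied to the import from the question, it would look something like the sketch below (same table, columns, and sheet as above; any text cell becomes NULL instead of aborting the whole insert):
INSERT INTO [dbo].[TestTable] ([ColumnA], [ColumnB], [ColumnC], [ColumnD])
SELECT TRY_CAST(A.[AAA] AS INT),  -- NULL when the cell holds text
       TRY_CAST(A.[BBB] AS INT),
       TRY_CAST(A.[CCC] AS INT),
       TRY_CAST(A.[DDD] AS INT)
FROM OPENROWSET
('Microsoft.ACE.OLEDB.12.0', 'Excel 12.0;Database=C:\Temp\TestFile\01-TestFile.xlsx;HDR=YES', 'select * from [Sheet$]') AS A;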

Related

SQL - Error converting data type nvarchar to float

I am trying to convert the data type of one of my columns (the table was imported from Excel), and it shows an error:
Error converting data type nvarchar to float
Code:
ALTER TABLE [dbo].[games_activity_2020$]
ALTER COLUMN [Version] float
What can I do differently?
There are probably character values that were inserted into the table while importing from Excel. You can check those values using the query below:
SELECT version
from [dbo].[games_activity_2020$]
where TRY_CONVERT(float, version) IS NULL
You can update those values and then try to ALTER the table again.
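A minimal sketch of that clean-up, assuming the unconvertible values should simply become NULL:
UPDATE [dbo].[games_activity_2020$]
SET [Version] = NULL
WHERE TRY_CONVERT(float, [Version]) IS NULL
  AND [Version] IS NOT NULL  -- only blank out values that exist but don't convert
After that, the ALTER COLUMN statement should succeed.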
I suggest you run
SELECT * FROM [dbo].[games_activity_2020$]
WHERE TRY_CAST([Version] as FLOAT) IS NULL
and fix up any problematic values.

Converting Oracle TIMESTAMP(6) TO SQL SERVER 2008 DATETIME2(6)

I am bulk importing a CSV file into SQL Server 2008; the CSV file was generated by exporting the table data from Oracle SQL Developer.
The data for one column in that CSV file is in Oracle TIMESTAMP(6) format, and I am using the DATETIME2(6) datatype for the corresponding column in SQL Server 2008.
I am importing the CSV file using the statement below:
USE H_CLAIMS
GO
BULK INSERT H_CLAIMS.dbo.APPLICATION_QUEUES
FROM 'D:\MyWork\HC DB Work\HCAIDDB_CSV_EXPORTS\APPLICATION_QUEUES_export.CSV'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n')
GO
While doing the above I get the error below:
Msg 4864, Level 16, State 1, Line 1
Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 1, column 5 (CREATED_DATE).
Msg 4864, Level 16, State 1, Line 1
The sample data in the column mentioned in the error looks like this:
21-NOV-14 08.57.51.565214000 AM
So I am looking for an answer that can overcome this issue, either with other attributes on the BULK INSERT statement or with a convert function that can properly convert the datetime in the sample data to the SQL Server 2008 DATETIME2 format.
SQL Server doesn't know how to convert the text value '21-NOV-14 08.57.51.565214000 AM' to a DATETIME2 column. Try it in a query analyser window:
SELECT CAST('21-NOV-14 08.57.51.565214000 AM' AS DATETIME2(6))
Note that if you're using DATETIME2(6) it'll be losing precision compared to what you're trying to import (the sample has nine fractional digits). Have a look at http://msdn.microsoft.com/en-GB/library/bb677335.aspx.
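For contrast, an ISO-style string with no more than six fractional digits converts cleanly:
SELECT CAST('2014-11-21 08:57:51.565214' AS DATETIME2(6))
-- returns 2014-11-21 08:57:51.565214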
When I've had to do this coming from DB2 text files, I've done it two different ways:
1. Import the datetime field into a varchar, then write a bit of SQL to manipulate the string into a format SQL Server can recognise, something like the sketch after this list. A bit slow and clunky, especially if you have a lot of data.
2. Use SSIS and create a transformation to do the string manipulation there. This has the advantage of still being able to bulk insert into the destination table, but does mean you need to have access to Integration Services.
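A minimal sketch of option 1, assuming the raw value sits in a varchar column raw_ts of a staging table (hypothetical names) in the '21-NOV-14 08.57.51.565214000 AM' format, and an English session language so the DD-MON-YY part parses:
SELECT CAST(
    -- keep 'DD-MON-YY hh.mm.ss.ffffff' (the first 25 characters), swap the dots
    -- in the time part for colons, then restore the dot before the fraction
    STUFF(REPLACE(LEFT(raw_ts, 25), '.', ':'), 19, 1, '.')
    + RIGHT(raw_ts, 3)  -- re-append ' AM' / ' PM'
    AS DATETIME2(6)) AS converted_ts
FROM staging_table;
Truncating to 25 characters keeps six fractional digits, which is all DATETIME2(6) can hold anyway.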
As I couldn't find any BULK INSERT option that would do the work for me, I went with a different approach. After many trials with CAST and CONVERT, I followed the approach below, which works as expected.
I created a function that converts the Oracle TIMESTAMP(6) string to an NVARCHAR value that can be inserted directly into a DATETIME2(6) column in SQL Server 2008. Below is the function.
Then I used a stored procedure that accepts the file path as an input parameter and a staging table to hold the NVARCHAR-based datetime values. In the stored procedure I used a dynamic BULK INSERT statement and then inserted into the required table. The procedure follows the function.
CREATE FUNCTION DATETIMECONVERTER
(
    @ORACLETIMESTAMP NVARCHAR(100)
)
RETURNS NVARCHAR(100)
AS
BEGIN
    DECLARE @convertedString NVARCHAR(100);
    -- '21-NOV-14 08.57.51.565214000 AM' becomes '21-NOV-14 08:57:51:565214000 AM'
    SELECT @convertedString = REPLACE(@ORACLETIMESTAMP, '.', ':');
    -- put the dot back before the fractional seconds:
    -- '21-NOV-14 08:57:51:565214000 AM' becomes '21-NOV-14 08:57:51.565214000 AM'
    RETURN STUFF(@convertedString, CHARINDEX(':', @convertedString, 18), 1, '.')
END
GO
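A quick check of the function against the sample value from the question:
SELECT dbo.DATETIMECONVERTER('21-NOV-14 08.57.51.565214000 AM');
-- returns '21-NOV-14 08:57:51.565214000 AM'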
CREATE PROCEDURE IMPORT_APPLICATION_ROLES @PATH VARCHAR(1000)
AS
IF OBJECT_ID('H_CLAIMS.DBO.TEMP_APPLICATION_ROLES', 'U') IS NOT NULL
    DROP TABLE H_CLAIMS.DBO.TEMP_APPLICATION_ROLES
-- Staging table: the datetime columns are NVARCHAR so the bulk load cannot fail on them
CREATE TABLE H_CLAIMS.DBO.TEMP_APPLICATION_ROLES
(
    ROLE_ID INT NOT NULL,
    ROLE_NAME NVARCHAR(255),
    ROLE_DESC NVARCHAR(255),
    CREATED_BY NVARCHAR(100),
    CREATED_DATE NVARCHAR(100),
    UPDATED_BY NVARCHAR(100),
    UPDATED_DATE NVARCHAR(100)
)
-- Dynamic SQL because BULK INSERT does not accept a variable for the file path
DECLARE @bulkInsert NVARCHAR(4000) = 'BULK INSERT TEMP_APPLICATION_ROLES FROM ''' + @PATH + ''' WITH ( FIELDTERMINATOR ='','', ROWTERMINATOR =''\n'' )';
EXEC(@bulkInsert)
INSERT INTO APPLICATION_ROLES
    (ROLE_ID, ROLE_NAME, ROLE_DESC, CREATED_BY, CREATED_DATE, UPDATED_BY, UPDATED_DATE)
SELECT ROLE_ID, ROLE_NAME, ROLE_DESC, CREATED_BY,
    dbo.DATETIMECONVERTER(CREATED_DATE) AS CREATED_DATE,
    UPDATED_BY, dbo.DATETIMECONVERTER(UPDATED_DATE) AS UPDATED_DATE
FROM H_CLAIMS.dbo.TEMP_APPLICATION_ROLES
DROP TABLE H_CLAIMS.DBO.TEMP_APPLICATION_ROLES
GO
To execute the procedure I used the statement below:
EXEC H_CLAIMS.DBO.IMPORT_APPLICATION_ROLES @PATH='D:\my_export.CSV';
Make sure to place the .csv files on a drive of the server machine when executing the stored procedure.
I may be late to answer this, but allow me to give you my workaround (if the precision doesn't really matter).
I import the timestamp from the Oracle table into a SQL Server 2008 varchar column, then update the varchar into a format that fits DATETIME2, and finally alter the SQL table column to datatype DATETIME2.
E.g. in case you have a timestamp like '01-JAN-15 12.00.00.000000000 AM +05:30':
update My_Table
set MyTimeStamp =
    substring(MyTimeStamp, 1, 10) +                     -- date part: '01-JAN-15 '
    REPLACE(substring(MyTimeStamp, 11, 8), '.', ':') +  -- time part, dots to colons: '12:00:00'
    substring(MyTimeStamp, 19, 13)                      -- fraction plus AM/PM; the timezone offset is dropped
where MyTimeStamp like '%.%.%.%';
alter table [My_Table] alter column MyTimeStamp DATETIME2;
GO
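On SQL Server 2012 or later, it can also be worth checking for rows that still won't convert before running the ALTER (a sketch using the same TRY_CONVERT trick as the answers above; TRY_CONVERT is not available on 2008):
SELECT MyTimeStamp
FROM My_Table
WHERE MyTimeStamp IS NOT NULL
  AND TRY_CONVERT(DATETIME2, MyTimeStamp) IS NULL  -- rows that would still make the ALTER fail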

error converting data types when importing excel file into sql server

As a SQL beginner, I am trying to import data from Excel into a table in SQL Server. I imported the data using the SQL import wizard. Since the wizard always defaults some of my numeric columns to nvarchar and won't allow me to change that in the mapping, I planned to import the data into a temp table, then use INSERT with the CAST function to transfer the data into the permanent target table.
When doing the insert, however, I got the error 'Error converting data type nvarchar to numeric'. Can anyone tell me why, and how to solve the issue? Here is my code:
INSERT INTO [DatabaseA].[dbo].[mstr_Project]
([Project_Start_Year]
,[Project_Name]
,[Client_Name]
,[Client_Revenue_in_Millions]
,[Client_Employee_Number])
SELECT [ProjectStartYear]
,[ProjectName]
,[ClientName]
,CAST([ClientRevenuesInMillions] AS NUMERIC)
,CAST([EmployeeNo] AS NUMERIC)
FROM [dbo].[temp_ProjectImport]
Thanks a million!!
You could do this instead: convert(nvarchar(255), @col)
INSERT INTO [DatabaseA].[dbo].[mstr_Project]
([Project_Start_Year]
,[Project_Name]
,[Client_Name]
,[Client_Revenue_in_Millions]
,[Client_Employee_Number])
SELECT [ProjectStartYear]
,[ProjectName]
,[ClientName]
,CAST([ClientRevenuesInMillions] AS NUMERIC)
,CONVERT(NVARCHAR(255), [EmployeeNo]) AS new_converted_value
FROM [dbo].[temp_ProjectImport]
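If the goal is for bad cells to land as NULL rather than abort the whole insert, the TRY_CAST approach from the first answer above applies here too (a sketch against the same staging table):
SELECT [ProjectStartYear]
    ,[ProjectName]
    ,[ClientName]
    ,TRY_CAST([ClientRevenuesInMillions] AS NUMERIC) AS Client_Revenue_in_Millions  -- NULL for non-numeric text
    ,TRY_CAST([EmployeeNo] AS NUMERIC) AS Client_Employee_Number
FROM [dbo].[temp_ProjectImport]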

Finding character values outside ASCII range in an NVARCHAR column

Is there a simple way of finding rows in an Oracle table where a specific NVARCHAR2 column has one or more characters which wouldn't fit into the standard ASCII range?
(I'm building a warehousing and data extraction process which takes the Oracle data, drags it into SQL Server -- UCS-2 NVARCHAR -- and then exports it to a UTF-8 XML file. I'm pretty sure I'm doing all the translation properly, but I'd like to find a bunch of real data to test with that's more likely to cause problems.)
Not sure how to tackle this in Oracle, but here is something I've done in MS-SQL to deal with the same issue...
create table #temp (id int, descr nvarchar(200))
insert into #temp values(1,'Now is a good time')
insert into #temp values(2,'So is yesterday')
insert into #temp values(3,'But not '+NCHAR(2012))  -- NCHAR(2012) is outside the ASCII range
-- Casting to varchar replaces characters the code page can't represent,
-- so any row that changes under the round trip contains such characters
select *
from #temp
where CAST(descr as varchar(200)) <> descr
drop table #temp
Sparky's example for SQL Server was enough to lead me to a pretty simple Oracle solution, once I'd found the handy ASCIISTR() function.
SELECT *
FROM test_table
WHERE test_column != ASCIISTR(test_column)
...seems to find any data outside the standard 7-bit ASCII range, and appears to work for NVARCHAR2 and VARCHAR2.

Storing Symbols like ϱπΩ÷√νƞµΔϒᵨλθ→%° in SQL Server XML

I ran these queries in my SQL Server:
select cast('<Answers>
<AnswerDescription> ϱπΩ÷√νƞµΔϒᵨλθ→%° </AnswerDescription>
</Answers>' as xml)
select ' ϱπΩ÷√νƞµΔϒᵨλθ→%°'
And got the following results
<Answers>
<AnswerDescription> ?pO÷v??µ??????%° </AnswerDescription>
</Answers>
and
" ?pO÷v??µ??????%°"
How can I make my SQL Server store and display these values as they are sent from the application?
In SQL Server, scalar string values are cast to VARCHAR by default.
Your example can be made to work by indicating that the strings should be treated as NVARCHAR by adding N before the opening single quote:
select cast(N'<Answers>
<AnswerDescription> ϱπΩ÷√νƞµΔϒᵨλθ→%° </AnswerDescription>
</Answers>' as xml)
select N' ϱπΩ÷√νƞµΔϒᵨλθ→%°'
If these strings are being incorrectly stored in the database, it is likely that they are being implicitly cast to VARCHAR at some point during insertion (e.g. INSERT). It's also possible that they are being stored correctly and are cast to VARCHAR on retrieval (e.g. SELECT).
If you add some code to the question showing how you're inserting data and the datatypes of the target tables, it should be possible to provide more detailed assistance.
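For instance, a minimal sketch of an insert that keeps the symbols intact (hypothetical table name; the key points are the NVARCHAR column and the N prefix on the literal):
CREATE TABLE dbo.Answers (AnswerDescription NVARCHAR(200));
INSERT INTO dbo.Answers (AnswerDescription) VALUES (N'ϱπΩ÷√νƞµΔϒᵨλθ→%°');  -- N prefix keeps the Unicode characters
SELECT AnswerDescription FROM dbo.Answers;  -- returns the symbols unchanged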
I believe it's a problem with an incorrectly set character set; change the character set to UTF-8.
I just tested it on my MySQL database. I changed the character set to utf8_bin using:
ALTER TABLE `tab1` CHANGE `test` `test` VARCHAR( 255 ) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL
It worked without any problem.