I am inserting data into tables using dynamic SQL:
SET @Profiles = N'SELECT ''' + @var1 + ''' as col1, col2, col3, col4, col5, col6, col7
FROM ' + @TableName + 'tbl1'
INSERT INTO
table (col1, col2, col3, col4, col5, col6, col7)
EXEC (@Profiles)
The query above is in a stored procedure, which is run by a job.
I noticed that while the job is running, rows with Japanese characters are inserted properly, but once the job has completed and I select from the target table, it returns '?' instead of the Japanese characters. I am using the collation SQL_Latin1_General_CP1_CI_AS and my column data type is nvarchar. I also tried changing the collation, but even with a Japanese collation it returns '?'. Do you know how I can handle this?
EDIT 1
I forgot to add that the stored procedure resides in an SSIS package. Maybe that helps.
I found what was wrong. The last step of the SSIS package calls a function that removes non-printable whitespace characters, and its parameter was declared as varchar instead of nvarchar. I changed it to nvarchar and it is OK now.
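For anyone hitting the same symptom, here is a minimal sketch of the pattern (the function name and body are hypothetical): passing Unicode text through a varchar parameter converts it to the collation's code page before the function body even runs, so Japanese characters are already '?' inside the function.
-- Hypothetical repro: a varchar parameter silently mangles Unicode input
CREATE FUNCTION dbo.CleanText (@input varchar(100)) -- should be nvarchar(100)
RETURNS nvarchar(100)
AS
BEGIN
    RETURN @input; -- the Japanese characters were lost at parameter binding
END;
GO
SELECT dbo.CleanText(N'こんにちは'); -- returns '?????'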
Is a Multi-Statement Request more performant than multiple separate requests in Teradata?
I have a mainframe job that launches a BTEQ script that is actually a Multi-Statement Request, as described in the example below:
insert into table (col1, col2, col3) values (val1,val2,val3)
; insert into table (col1, col2, col3) values (val4,val5,val6)
; insert into table (col1, col2, col3) values (val7,val8,val9);
My question is: should I keep this one job with the Multi-Statement Request, or separate it into multiple jobs, one per insert? Which way is more performant?
Thanks in advance.
If you are using BTEQ, you can do a batch/bulk insert operation using the .REPEAT/PACK command. An example:
.set sessions 5
.logon ...
.import vartext ',' file = \\your\file\path\somefile.csv;
.repeat * pack 100
using (val1 varchar(10), val2 varchar(20), val3 varchar(10))
insert into table (col1, col2, col3)
values (:val1, :val2, :val3);
Even better is using a proper utility like FastLoad or TPT, but short of that, the more inserts you can cram into a single request, the better off you are.
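For comparison, a rough sketch of a FastLoad script for the same load (the TDPID, logon, database, and error table names are placeholders; with VARTEXT input, all DEFINE fields must be VARCHAR):
LOGON tdpid/your_user,your_password;
SET RECORD VARTEXT ",";
BEGIN LOADING mydb.target_table
    ERRORFILES mydb.target_err1, mydb.target_err2;
DEFINE val1 (VARCHAR(10)),
       val2 (VARCHAR(20)),
       val3 (VARCHAR(10))
FILE = \\your\file\path\somefile.csv;
INSERT INTO mydb.target_table (col1, col2, col3)
VALUES (:val1, :val2, :val3);
END LOADING;
LOGOFF;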
I have an external table in Hive.
Current:
col1 - string
col2 - string
col3 - string
col4 - float
col5 - int
I want to change the data type of col3 to date
Expected:
col1 - string
col2 - string
col3 - date
col4 - float
col5 - int
I tried the regular SQL command, but it did not work:
alter table table_name modify col3 date;
Error:
Error while compiling statement: FAILED: ParseException line 1:32 cannot recognize input near 'modify' 'col3' 'date' in alter table statement
Requesting assistance here. Thanks in advance.
The correct command is:
alter table table_name change col3 col3 date;
The column change command will only modify Hive's metadata, and will
not modify data. Users should make sure the actual data layout of the
table/partition conforms with the metadata definition.
See syntax and manual here: Change Column Name/Type/Position/Comment
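Since the change is metadata-only, it may be worth verifying that the existing strings in col3 actually parse as dates before altering. A quick sketch (Hive's CAST to date returns NULL for values not in yyyy-MM-dd format):
-- Rows whose col3 values would not survive as dates
SELECT col3
FROM table_name
WHERE col3 IS NOT NULL
  AND CAST(col3 AS date) IS NULL;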
I'm using MSSQL 2008 and I'm persistently failing to convert nvarchar values to numeric.
Can you please advise? I have tried different solutions found on the web, but all of them fail with the error message:
Msg 8114, Level 16, State 5, Line 15 Error converting data type
nvarchar to numeric.
I have built a reduced example for demonstration purposes:
IF OBJECT_ID('tempdb..#temptable', 'U') IS NOT NULL
DROP TABLE #temptable
create table #temptable(
col1 nvarchar(10),
col2 numeric(10,5)
)
insert into #temptable values ('0,5','0')
select *,convert(numeric(18,2),col1) from #temptable
UPDATE #temptable
SET col2 = CAST(col1 AS numeric(10,5))
WHERE ISNUMERIC(col1) = 1
SELECT col1
, CASE ISNUMERIC(col1)
WHEN 1 THEN CONVERT(numeric(18,2),col1)
ELSE 0.00
END
from #temptable
I already found a strong hint about what's going wrong... the issue seems to be related to the ',' as decimal separator, while SQL Server expects a '.'.
If you change the following line to:
insert into #temptable values ('0.5','0')
it works.
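If the source data genuinely uses commas as decimal separators, one option is to normalize them before converting. A sketch, assuming every comma in col1 is a decimal separator rather than a thousands separator:
-- Swap the comma for a period, then convert
UPDATE #temptable
SET col2 = CAST(REPLACE(col1, ',', '.') AS numeric(10,5))
WHERE col1 NOT LIKE '%[^0-9,]%';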
The problem is you are using ISNUMERIC(col1) = 1, which returns 1 for a ton of values that cannot actually be converted to numeric, like ISNUMERIC('1e4') or ISNUMERIC('$') or, in your case, ISNUMERIC('1,000,000'). Don't use ISNUMERIC in this fashion.
Instead, try this...
UPDATE #temptable
SET col2 = CAST(col1 AS numeric(10,5))
WHERE col1 not like '%[^0-9.]%'
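For reference, a quick demonstration of those surprising ISNUMERIC results; all of these return 1, yet none of the values converts cleanly to numeric:
SELECT ISNUMERIC('1e4')       AS scientific_notation,  -- 1
       ISNUMERIC('$')         AS lone_currency_symbol, -- 1
       ISNUMERIC('1,000,000') AS thousands_separators; -- 1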
Use try_convert() in SQL Server 2012+:
UPDATE #temptable
SET col2 = TRY_CONVERT(numeric(10,5), col1)
WHERE ISNUMERIC(col1) = 1;
SQL Server can rearrange expression evaluation within a query, so the CAST() might be evaluated before the WHERE -- resulting in the error. You can make a similar change to the SELECT version of your query.
In SQL Server 2008, you should be able to do effectively the same thing using CASE:
UPDATE #temptable
SET col2 = (CASE WHEN ISNUMERIC(col1) = 1 THEN CONVERT(numeric(10, 5), col1) END)
WHERE ISNUMERIC(col1) = 1;
Note: There may be cases where ISNUMERIC() returns 1 but the value still cannot be converted (for instance, overflow). In those cases, this version would still fail.
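For instance, this value passes the ISNUMERIC check but still overflows numeric(10,5), which tops out at 99999.99999:
SELECT ISNUMERIC('123456.7');              -- 1
SELECT CONVERT(numeric(10,5), '123456.7'); -- Msg 8115: arithmetic overflow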
I'm working on speeding up a query, and I've noticed that, for some reason, the more empty columns I add to the query, the slower it gets.
With only the Id column the query returns 100k records in approx. 1 second.
If I add about 20 empty columns it goes to 4 seconds.
Questions
- What is the default data type of the empty string '' in SQL Server?
- Any way to speed this up?
SELECT Id,
'' as col1,
'' as col2,
'' as col3
FROM myTable
It will depend on how many rows are in myTable. For example, if you have 905k rows in myTable, SQL Server is essentially materializing those 20 '' columns for all 905k rows.
I just tried it on my own table, which has 805k rows. For every additional column I add, SQL Server creates a '' value for each row.
Hope this helps you understand it more clearly.
The default data type seems to be varchar(1) -- you can insert it into a temp table and check the temp table's structure to confirm. One option you can try is declaring a variable and using it rather than the empty string literals:
declare @space varchar(40) = ''
SELECT
    id,
    @space as col1,
    @space as col2,
    @space as col3
FROM dbo.[table]
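To confirm the inferred type of '' yourself, a quick check along the lines suggested above:
-- Materialize the literal into a temp table and inspect its structure
SELECT '' AS empty_col INTO #check;
EXEC tempdb..sp_help '#check'; -- reports empty_col as varchar(1)
DROP TABLE #check;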
I'm starting with SQL Server 2012 and I have a little problem.
I want to import data from a csv file to a table. One of the fields in the csv file is a string that has only the values "Yes" or "No".
I've seen that there isn't a boolean type in SQL Server 2012, but there is a bit type instead.
The question I have is how to store in the table the value 1 when the string is "Yes" and 0 when the string is "No".
I've tried this:
BULK INSERT PRODUCT FROM 'C:\...\products.csv'
WITH (
FIELDTERMINATOR = ';',
ROWTERMINATOR = '\n',
ERRORFILE = 'C:\...\errors.csv',
TABLOCK)
I'm using BULK INSERT because I have thousands of rows, but I don't know if this is the best way or what better alternatives I have.
Any other approach or suggestion would be appreciated.
You either modify the data in the table by replacing 'Yes' with 1, etc., or you put the data into a staging table and do the manipulation of the data there. I would prefer the latter, as it allows you to perform any other data clean-up tasks; a sketch of the staging load follows.
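A minimal sketch of that staging step (dbo.StagingTable and its column types are assumptions; col3 stays varchar to hold the raw 'Yes'/'No' strings):
CREATE TABLE dbo.StagingTable (
    col1 varchar(50), -- adjust types and lengths to your CSV
    col2 varchar(50),
    col3 varchar(3)   -- raw 'Yes'/'No'
);
BULK INSERT dbo.StagingTable FROM 'C:\...\products.csv'
WITH (
    FIELDTERMINATOR = ';',
    ROWTERMINATOR = '\n',
    TABLOCK)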
As Aaron Bertrand suggests, you can then perform the following, which loads the data from the staging table into the desired table and converts Yes/No to bits:
INSERT dbo.Product(col1, col2, BITColumn)
SELECT col1, col2, CASE col3 WHEN 'Yes' THEN 1 ELSE 0 END
FROM dbo.StagingTable;
Alternatively, you can do the Yes/No conversion while loading with OPENROWSET(BULK ...):
SELECT
    col1,
    col2,
    CASE col3
        WHEN 'Yes' THEN CAST(1 AS bit)
        WHEN 'No' THEN CAST(0 AS bit)
    END AS col3 -- SELECT ... INTO requires a name for the CASE column
INTO #my_temporary_staging_table
FROM OPENROWSET(
    BULK 'C:\...\products.csv',
    -- OPENROWSET(BULK) on SQL Server 2012 does not accept FIELDTERMINATOR/ROWTERMINATOR;
    -- the ';'-delimited layout has to be described in a format file instead
    FORMATFILE = 'C:\...\products.fmt'
) AS t -- assumes the format file names the columns col1, col2, col3
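For completeness, a sketch of what the non-XML format file referenced above could look like (the 50-character field lengths are assumptions):
11.0
3
1   SQLCHAR   0   50   ";"    1   col1   ""
2   SQLCHAR   0   50   ";"    2   col2   ""
3   SQLCHAR   0   3    "\n"   3   col3   ""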