Bulk insert based on condition SQL Server

I am importing data into a SQL Server table using BULK INSERT. The statement is built as dynamic SQL so that the file path, held in @location, can be passed in:
DECLARE @sql NVARCHAR(MAX) = N'
BULK INSERT MySampleDB.dbo.Sample
FROM ''' + @location + '''
WITH
(
    FIRSTROW = 2,
    FIELDTERMINATOR = ''","'',
    ROWTERMINATOR = ''\n'',
    TABLOCK
)';
EXEC (@sql);
I need to check whether a given column value is an integer. If it is not an integer, I need to skip that entire record during the insert.

You could use a CASE WHEN expression to work out whether the value is an integer.
Look at this:
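A minimal sketch of that idea, assuming the values have already been loaded into a varchar column (the names YourTable and val are illustrative, not from the original post):
SELECT val,
       CASE WHEN TRY_CONVERT(INT, val) IS NOT NULL THEN 'integer'
            ELSE 'not an integer'
       END AS kind
FROM YourTable;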

As far as I know, you cannot do this within BULK INSERT itself, but you can achieve it with the help of the following steps.
Step 1: Add a varchar column to your table so that you can map the CSV column to this column.
Step 2: Populate the int column through the TRY_CONVERT function.
CREATE TABLE Bulk_InsertTable
(ColImport VARCHAR(100), ColINT INT)
GO
CREATE VIEW View_Bulk_InsertTable
AS SELECT ColImport
FROM Bulk_InsertTable
GO
BULK INSERT View_Bulk_InsertTable FROM 'C:\Test.csv'
WITH
(
FIRSTROW = 2,
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n',
TABLOCK
)
UPDATE Bulk_InsertTable
SET ColINT = TRY_CONVERT(INT, ColImport);
+--------------+-----------+
| ColImport | ColINT |
+--------------+-----------+
| 669165933 | 669165933 |
| 871543967AAA | NULL |
| 871543967AAA | NULL |
| 514321792 | 514321792 |
| 115456712 | 115456712 |
+--------------+-----------+
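To skip the non-integer records entirely, as the question asks, you could then delete the rows that failed the conversion:
DELETE FROM Bulk_InsertTable
WHERE ColINT IS NULL; -- rows whose ColImport was not a valid integer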

Related

Query on how to replace numerical data from a string/column

I have values in my column as below. Can anyone help me with how to replace any numeric data present in a column or string with blanks using a SQL Server query?
Below is the column data. How do I replace the numbers with blanks and display only the underscores?
You could approach this by counting the number of underscores and then generating a string containing that many underscores:
SELECT Column1, REPLICATE('_', LEN(Column1) - LEN(REPLACE(Column1, '_', '')))
FROM yourTable;
Here is a more generic solution that handles more than just the underscore character. It works on SQL Server 2017 and later.
As @Squirrel correctly mentioned, the TRANSLATE() function is very handy for such cases.
SQL
-- DDL and sample data population, start
DECLARE @tbl TABLE (ID INT IDENTITY PRIMARY KEY, col VARCHAR(256));
INSERT INTO @tbl (col) VALUES
('2413347_6752318'),
('7263_872_767'),
('123Code456');
-- DDL and sample data population, end
SELECT col AS [Before]
, REPLACE(TRANSLATE(col, '0123456789', SPACE(10)), SPACE(1), '') AS [After]
FROM @tbl;
Output
+-----------------+-------+
| Before | After |
+-----------------+-------+
| 2413347_6752318 | _ |
| 7263_872_767 | __ |
| 123Code456 | Code |
+-----------------+-------+
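If you are on a version earlier than SQL Server 2017, where TRANSLATE() is not available, a sketch of the same idea with nested REPLACE calls gives the same result for the sample data above:
SELECT col AS [Before]
, REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(
  col, '0', ''), '1', ''), '2', ''), '3', ''), '4', ''), '5', ''), '6', ''), '7', ''), '8', ''), '9', '') AS [After]
FROM @tbl;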

How to handle an array in a stored procedure on DB2?

I'm developing an app which stores data in DB2, and I need to be able to 'delete' data in bulk in one of the tables. The way the data is 'deleted' is by changing the row's 'deleted' value to 'Y'.
The form of the table is:
id | name  | deleted
1  | name1 | N
2  | name2 | N
...
x  | namex | N
What I want to do is write a SQL stored procedure which takes as a parameter an array with the IDs of the items that need to change from 'N' to 'Y'.
The way I do it (individually) is:
UPDATE MyTable SET DELETED = 'Y' WHERE id = '1';
So with a stored procedure I should only send an array of this form:
[1, 20, 5, ... , x]
and the rows with those IDs should have their value changed to 'Y'.
The structure I had in mind for the stored procedure is:
PROCEDURE deleteSeveral (arrayWithIds)
LANGUAGE SQL
BEGIN
-- loop over the array of IDs
UPDATE MyTable SET DELETED = 'Y' WHERE id = arrayWithIds[i];
END
Could anybody help me with this? Thanks!
Try to pass the list of IDs as an "xml-like" string:
UPDATE MyTable t
SET DELETED = 'Y'
WHERE EXISTS (
  SELECT 1
  FROM XMLTABLE (
    '$D/d/i' PASSING XMLPARSE(DOCUMENT '<d><i>1</i><i>20</i><i>5</i></d>') AS "D"
    COLUMNS
      i INT PATH '.'
  ) p
  WHERE p.i = t.id
)
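A hedged sketch of how this might be wrapped into the stored procedure the question asks for, passing the ID list as a plain VARCHAR parameter (the name idsXml is illustrative):
CREATE OR REPLACE PROCEDURE deleteSeveral (IN idsXml VARCHAR(4000))
LANGUAGE SQL
BEGIN
  UPDATE MyTable t
  SET DELETED = 'Y'
  WHERE EXISTS (
    SELECT 1
    FROM XMLTABLE (
      '$D/d/i' PASSING XMLPARSE(DOCUMENT idsXml) AS "D"
      COLUMNS
        i INT PATH '.'
    ) p
    WHERE p.i = t.id
  );
END
The caller would then invoke it as CALL deleteSeveral('<d><i>1</i><i>20</i><i>5</i></d>').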

How can I create an external table using textfile with presto?

I have a csv file in the hdfs directory /user/bzhang/filefortable:
123,1
And I use the following to create an external table with Presto on Hive:
create table hive.testschema.au1 (count bigint, matched bigint) with (format='TEXTFILE', external_location='hdfs://192.168.0.115:9000/user/bzhang/filefortable');
But when I run select * from au1, I got
presto:testschema> select * from au1;
count | matched
-------+---------
NULL | NULL
I changed the comma to a TAB as the delimiter, but it still returns NULL. But if I modify the csv as
123
with only 1 column, the select * from au1 gives me:
presto:testschema> select * from au1;
count | matched
-------+---------
123 | NULL
So am I wrong about the file format, or is it something else?
I suppose the field delimiter of the table is '\u0001', which is Hive's default for TEXTFILE.
You can either change the ',' in the file to '\u0001', or change the table's field delimiter to ',', and check that your problem is solved.
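One way to get a ','-delimited table is to create it through Hive rather than Presto, since Hive DDL lets you specify the field delimiter directly. A sketch, assuming you can run Hive DDL against the same metastore:
CREATE EXTERNAL TABLE testschema.au1 (`count` BIGINT, `matched` BIGINT)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION 'hdfs://192.168.0.115:9000/user/bzhang/filefortable';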

Convert number to string (number in string or null in string)

In SQL Server, what is the shortest way to convert a number to a string (the number as a string, or 'null' as a string)?
Example:
number 1 ---> output '1'
number null ---> output 'null'
Use CAST and CONVERT (Transact-SQL).
MS SQL Server 2012 Schema Setup:
create table T
(
Number int
);
insert into T values(1);
insert into T values(null);
Query 1:
select cast(Number as varchar(11))
from T;
Results:
| COLUMN_0 |
|----------|
| 1 |
| (null) |
Or isnull(cast(Number as varchar(11)), 'null') if you are looking for the string value null.
Not sure what you mean by shortest or why that is important, but this is a bit shorter: isnull(left(Number, 11), 'null') (LEFT implicitly converts the number to a string).
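For completeness, the ISNULL variant written out as a full query against the same table:
select isnull(cast(Number as varchar(11)), 'null')
from T;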

Insert an empty string on SQL Server with BULK INSERT

The example table contains the fields Id (the identity of the table, an integer) and Name (a simple attribute that allows null values; a string).
I'm trying a CSV that contains this:
1,
1,""
1,''
None of them gives me an empty string as the result of the bulk insert. I'm using SQL Server 2012.
What can I do?
As far as I know, BULK INSERT can't insert an empty string; it can either keep the null value or use a default value, depending on whether the KEEPNULLS option is used. For your 3 sample records, after the insert the table should look like this:
| id | name |
| 1  | NULL |
| 1  | ""   |
| 1  | ''   |
The reason is that BULK INSERT treats the second column of your first row as null; for the other 2 rows it takes the second column value as not null and inserts it as-is. Instead of making BULK INSERT produce the empty string for you, you can give the table column a default value of an empty string.
Example as following:
CREATE TABLE BulkInsertTest (id int, name varchar(10) DEFAULT '')
Bulk insert the same CSV file into the table:
BULK INSERT Adventure.dbo.BulkInsertTest
FROM '....\test.csv'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)
SELECT * FROM BulkInsertTest
The result will be like the following (the first row in your CSV now gets an empty string):
| id | name |
| 1  |      |
| 1  | ""   |
| 1  | ''   |
Please bear in mind that the specified DEFAULT value will only get inserted if you are not using the option KEEPNULLS.
Using the same example as above, if you add the option KEEPNULLS to the BULK INSERT, i.e.:
BULK INSERT BulkInsertTest
FROM '....\test.csv'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n',
KEEPNULLS
)
will result in the default column value being ignored and NULLs being inserted for empty strings, i.e.:
SELECT * FROM BulkInsertTest
will now give you:
id name
1 NULL
1 ""
1 ''
There does not seem to be a good reason to add KEEPNULLS in your example, but I came across a similar problem just now where KEEPNULLS was required in the BULK INSERT.
My solution was to make the column [name] in the staging table BulkInsertTest NOT NULL; remember, though, that the DEFAULT column value then gets ignored and an empty string gets inserted instead.
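A sketch of that staging-table definition (assuming the behaviour described above: the NOT NULL column receives '' rather than NULL even with KEEPNULLS):
CREATE TABLE BulkInsertTest (id int, name varchar(10) NOT NULL DEFAULT '')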
See more here: Keep Nulls or Use Default Values During Bulk Import (SQL Server)