Bulk insert in SQL and insert other values - sql

I have one table with 3 fields
id_Complex | fileLine | date
The field id_Complex is an ID generated by my program; it stays the same for every line of one file and only changes when another file is processed. fileLine is just a line from the file, and date is the date the line was recorded.
Right now my program does one insert into the database for each line read from the file.
I want to know whether it is possible to do a bulk insert that only supplies values for specific columns of the table: I would send the id_Complex to SQL once, and SQL would make the inserts using that id_Complex together with the lines of the file and the date.
How can I make that bulk insert?
Is it possible to do a bulk insert where one column has a predefined value?

You should process the input file in your program and generate a temp file that already contains the correct id_Complex on each line, then do a bulk insert from that temp file.
After the insert, just delete the temp file.

If I understand what you are asking, you could create a temporary table TempTable and do a bulk insert into it. Then perform an UPDATE from TempTable joining to your permanent table by id_Complex. You can also set the date in this UPDATE statement. Finally, clear out the temporary table.
Alternatively, you could bulk import the file into a temporary table, delete the old permanent table, and rename the temporary table as the permanent table.
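For illustration, here is a minimal T-SQL sketch of the staging-table idea, using an INSERT ... SELECT from the temp table rather than an UPDATE; the table name MyTable, the @id_Complex value, and the file path are all assumptions:
DECLARE @id_Complex int = 42;            -- value generated by the calling program

CREATE TABLE #FileLines (fileLine nvarchar(4000));

BULK INSERT #FileLines
FROM 'C:\ImportFiles\input.txt'          -- example path, an assumption
WITH (ROWTERMINATOR = '\n');

-- Stamp every staged line with the program-supplied id_Complex and the current date
INSERT INTO MyTable (id_Complex, fileLine, [date])
SELECT @id_Complex, fileLine, GETDATE()
FROM #FileLines;

DROP TABLE #FileLines;
With this shape the program only sends the id_Complex and the file path; SQL Server fills in the date itself.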

Related

How to append unique values from temp_tbl into original_tbl (SQL Server)?

I have a table that I'm trying to append unique values to. Every month I get a list of user logins to import into this table. I would like to keep all the original values and just append the new, unique values onto the existing table. Both the table and the flat file have a single column with unique values, built like this:
_____
login
abcde001
abcde002
...
_____
I'm bulk ingesting the flat file into a temp table, with this:
IF OBJECT_ID('tempdb..#FLAT_FILE_TBL') IS NOT NULL
    DROP TABLE #FLAT_FILE_TBL;

CREATE TABLE #FLAT_FILE_TBL
(
    ntlogin2 nvarchar(15)
);

BULK INSERT #FLAT_FILE_TBL
FROM 'C:\ImportFiles\logins_Dec2021.csv'
WITH (FIELDTERMINATOR = ' ');
Is there a join that would give me the table with existing values + new unique values appended? I'd rather not hard code a loop to evaluate it line by line.
Something like (pseudocode):
append unique {login} from temp_tbl into original_tbl
Hopefully it's an easy answer for someone out there.
Thanks!
A poster on Reddit's r/sql provided this answer, which I'm pursuing:
Merge statement?
It looks like using a merge statement will do exactly what I want. Thanks for those who already posted replies.
You can check whether a record exists using the EXISTS clause and insert it only if it doesn't already exist in the target table. You can also use a MERGE statement to achieve the same thing. Depending on what you want to do with the existing records in the target table, you can adjust the MERGE statement; since you only want to insert new records here, you only need to specify what happens when an unmatched record comes in. Here is an example:
MERGE original_tbl T
USING temp_tbl S
    ON T.login = S.login
WHEN NOT MATCHED THEN
    INSERT (login)
    VALUES (S.login);
Another solution would be to left join the temp table to the target table and insert only when the record doesn't exist.
INSERT INTO original_tbl(login)
SELECT S.Login
FROM temp_tbl S
LEFT JOIN original_tbl T
ON S.Login = T.Login
WHERE T.Login IS NULL
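For completeness, the EXISTS-based check mentioned at the start of this answer might look like the following sketch (same table and column names as above):
INSERT INTO original_tbl (login)
SELECT S.login
FROM temp_tbl S
WHERE NOT EXISTS (SELECT 1 FROM original_tbl T WHERE T.login = S.login);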

How to recalculate table created by CTAS?

I have created table using this statement:
CREATE TABLE tablename STORED AS PARQUET AS (SELECT ...)
How can I recalculate it without the DROP TABLE - CREATE TABLE flow?
In Impala, the INSERT INTO syntax appends data to a table. The existing data files are left as-is, and the inserted data is put into one or more new data files.
The INSERT OVERWRITE syntax replaces the data in a table. Currently, the overwritten data files are deleted immediately; they do not go through the HDFS trash mechanism.
So if you want to replace the data in the table tablename without going through DROP TABLE and CREATE TABLE, you can run a query like this:
INSERT OVERWRITE TABLE tablename SELECT * from <source_tablename>;
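If the goal is to truly recalculate the table, the same query that originally built it can be re-run inside the overwrite. A hedged example, with a made-up source table and columns:
-- Hypothetical: re-run the aggregation that the CTAS originally computed
INSERT OVERWRITE TABLE tablename
SELECT customer_id, SUM(amount) AS total
FROM sales
GROUP BY customer_id;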

SQL bulk insert: calculate values at insert

So data can be imported into SQL Server from .csv files.
Import CSV file into SQL Server
I'm trying to use this to import test data into a database, and I want the dates to be up to date. Currently we use .sql files with getdate(), so after inserting, the dates are all newly generated. But when inserting getdate() with bulk insert from a .csv file, the column just ends up containing the literal string 'getdate()'. The dates are only an example; I need different rows to be calculated differently, so one date might get 5 added to it, another 10.
Although BULK INSERT does not let you specify function calls, you could work around the problem by changing your table definition: add a default constraint to your date column, and do not insert anything into that column through BULK INSERT. This ensures SQL Server fills the column by calling getdate():
ALTER TABLE MyTable ADD CONSTRAINT
DF_MyTable_MyDateColumn_GetDate DEFAULT GETDATE() FOR MyDateColumn
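The question also mentions that different rows need different adjustments (add 5 to one date, 10 to another). One hedged way to handle that is to bulk load the rows first and then shift the dates with DATEADD; the table, column, and WHERE conditions below are illustrative assumptions:
-- Illustrative only: MyTable, MyDateColumn and RowType are assumed names
UPDATE MyTable SET MyDateColumn = DATEADD(DAY, 5, MyDateColumn) WHERE RowType = 'A';
UPDATE MyTable SET MyDateColumn = DATEADD(DAY, 10, MyDateColumn) WHERE RowType = 'B';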

BULK INSERT into specific columns?

I want to bulk insert columns of a csv file to specific columns of a destination table.
Description - destination table has more columns than my csv file. So, I want the csv file columns to go to the right target columns using BULK INSERT.
Is this possible? If yes, then how do I do it?
I saw the tutorial and code at - http://blog.sqlauthority.com/2008/02/06/sql-server-import-csv-file-into-sql-server-using-bulk-insert-load-comma-delimited-file-into-sql-server/
and http://www.codeproject.com/Articles/439843/Handling-BULK-Data-insert-from-CSV-to-SQL-Server
BULK INSERT dbo.TableForBulkData
FROM 'C:\BulkDataFile.csv'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)
They don't show you how you can control where data is inserted.
Yes, you can do this. The easiest way is to create a view that selects from the target table, listing the columns that you want the data to go to, in the order that they appear in the source file. Then BULK INSERT into your view instead of directly into the table.
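A hedged sketch of that approach, reusing the table and file from the question; the three column names are placeholders for whatever columns your .csv actually supplies, and the omitted columns must be nullable or have defaults for the insert to succeed:
-- The view exposes only the columns present in the .csv, in the file's column order
CREATE VIEW dbo.TableForBulkData_Load AS
SELECT Col1, Col2, Col3
FROM dbo.TableForBulkData;
GO

BULK INSERT dbo.TableForBulkData_Load
FROM 'C:\BulkDataFile.csv'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
);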

merging data from old table into new for a monthly archive

I have a SQL statement to insert data into a table for archiving, but I need a merge statement to run on a monthly basis to update the new table (2) with any data that changed in the old table (1) and should now be moved into the archive.
Part of the issue is removing the moved data from the old table. My insert is not doing that, but I need the saved data to be purged from the original table.
Is there a single SQL statement that will move data out of one table into another in this way, or does it need to be a two-step operation?
The initial statement moved data depending on age and a few other relative factors.
The insert is:
INSERT /*+ append */
INTO tab1
SELECT *
FROM tab2
WHERE (Postingdate < TO_DATE ('2001/07/01', 'yyyy/mm/dd')
OR jobname IS NULL)
AND STATUS <> '45';
All help appreciated...
The MERGE statement will let you do this in one statement by adding a DELETE clause to the update clause. See the Oracle documentation on MERGE.
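To illustrate the syntax being referred to, here is a hedged Oracle sketch; the join key and column list are assumptions, and note that the DELETE clause removes rows from the target table that the UPDATE just touched, not rows from the source table:
MERGE INTO tab1 t
USING tab2 s
ON (t.id = s.id)                      -- join key is an assumption
WHEN MATCHED THEN
    UPDATE SET t.status = s.status
    DELETE WHERE (t.status = '45')    -- prunes updated target rows meeting this test
WHEN NOT MATCHED THEN
    INSERT (id, status, postingdate)
    VALUES (s.id, s.status, s.postingdate);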
I think you should try this with a partitioned table. My idea is to create a table with a range partition on the date. (Oracle range-partition bounds must be literals, so a fixed cutoff date stands in for sysdate - 30 below; the table name is a placeholder, since the original post omitted one.)
CREATE TABLE tab2_part (id NUMBER PRIMARY KEY, name VARCHAR2(100), j_date DATE)
PARTITION BY RANGE (j_date) (
    PARTITION one_mnth VALUES LESS THAN (TO_DATE('2001-07-01', 'YYYY-MM-DD')),
    PARTITION the_rest VALUES LESS THAN (MAXVALUE)
);
Then move that partition into another table and truncate that partition.
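A hedged sketch of that step, assuming the partitioned table above (tab2_part) and an empty, identically structured standalone table arch_tab; EXCHANGE PARTITION swaps the partition's data segment with the standalone table in a single dictionary operation:
-- arch_tab is an assumed archive table with the same column layout as tab2_part
ALTER TABLE tab2_part EXCHANGE PARTITION one_mnth WITH TABLE arch_tab;

-- after the exchange the partition holds arch_tab's old (empty) segment,
-- so truncating it simply mirrors the "truncate that partition" step above
ALTER TABLE tab2_part TRUNCATE PARTITION one_mnth;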