SQL bulk insert: calculate values at insert - sql

Data can be imported into SQL Server from .csv files:
Import CSV file into SQL Server
I'm trying to use this to import test data into a database, and I want the dates to be up to date. Currently we use .sql files with getdate(), so after inserting, the dates are all newly generated. But when bulk inserting getdate() from a .csv file, the literal string 'getdate()' is inserted instead of the value. The dates are only an example; I need different rows to be calculated differently, so one date might get 5 added to it, another 10.

Although BULK INSERT does not let you specify function calls, you can work around the problem by changing your table definition: add a default constraint to your date column, and do not insert anything into it through BULK INSERT. This ensures that SQL Server fills the column by calling GETDATE():
ALTER TABLE MyTable ADD CONSTRAINT
DF_MyTable_MyDateColumn_GetDate DEFAULT GETDATE() FOR MyDateColumn
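Since different rows need different offsets added, a default constraint alone is not enough; one common pattern is to bulk load into a staging table and compute the per-row dates in an INSERT ... SELECT. A minimal sketch, assuming illustrative names (StagingTable, MyTable, OffsetDays, the file path) and that the CSV carries the offset as a column:

```sql
-- Sketch: bulk load into a staging table, then compute per-row dates.
-- Table, column, and file names here are assumptions, not from the question.
CREATE TABLE StagingTable (SomeValue VARCHAR(100), OffsetDays INT);

BULK INSERT StagingTable
FROM 'c:\testdata.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

-- Each row's date is computed at insert time, e.g. today + 5 or today + 10 days.
INSERT INTO MyTable (SomeValue, MyDateColumn)
SELECT SomeValue, DATEADD(DAY, OffsetDays, GETDATE())
FROM StagingTable;

DROP TABLE StagingTable;
```

This keeps the bulk load fast while still letting each row be calculated differently.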

Related

How to import daily csv data into table with generated columns postgres

I'm new to PostgreSQL and am looking for some guidance and best practice.
I have created a table by importing data from a csv file. I then altered the table by creating multiple generated columns like this:
ALTER TABLE master
ADD office VARCHAR(50)
GENERATED ALWAYS AS (CASE WHEN LEFT(location,4)='Chic' THEN 'CHI'
ELSE LEFT(location,strpos(location,'_')-1) END) STORED;
But when I try to import new data into the table I get the following error:
ERROR: column "office" is a generated column
DETAIL: Generated columns cannot be used in COPY.
My goal is to be able to import new data each day to the table and have the generated columns automatically populate in order to transform the data as I would like. How can I do so?
CREATE TEMP TABLE master (location VARCHAR);
ALTER TABLE master
ADD office VARCHAR
GENERATED ALWAYS AS (
CASE
WHEN LEFT(location, 4) = 'Chic' THEN 'CHI'
ELSE LEFT(location, strpos(location, '_') - 1)
END
) STORED;
--INSERT INTO master (location) VALUES ('Chicago');
--INSERT INTO master (location) VALUES ('New_York');
COPY master (location) FROM $$d:\cities.csv$$ CSV;
SELECT * FROM master;
Is this the structure and the behaviour you are expecting? If not, please provide more details regarding your table structure, your importable data and your importing commands.
Also, when you try to import the csv file, the columns may not be linked properly, or the delimiter may not be set correctly. Try to specify each column in the exact order it appears in your csv file.
https://www.postgresql.org/docs/12/sql-copy.html
Note: d:\cities.csv contains:
Chicago
New_York
EDIT:
If column positions are mixed up between the table and the csv, the following approach may come in handy:
1. create temporary table tmp (csv_column_1 <data_type>, csv_column_2 <data_type>, ...); (including ALL csv columns)
2. copy tmp from '/path/to/file.csv';
3. insert into master (location, other_info, ...) select csv_column_3 as location, csv_column_7 as other_info, ... from tmp;
Importing data through an intermediate table may slow things down a little, but gives you a lot of flexibility.
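Put concretely for the master table above, the staging pattern might look like this; the staging column names and file path are assumptions, since the real csv layout isn't shown:

```sql
-- Sketch of the staging-table pattern for the master table above.
-- tmp's column list must mirror the csv exactly (names here are assumed).
CREATE TEMP TABLE tmp (city VARCHAR);

COPY tmp FROM '/path/to/file.csv' CSV;

-- Only non-generated columns are listed; PostgreSQL computes office itself.
INSERT INTO master (location)
SELECT city FROM tmp;

DROP TABLE tmp;
```

Because the generated column never appears in the INSERT column list, the "Generated columns cannot be used in COPY" error does not arise.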
I was getting the same error when importing into PostgreSQL from a csv. I found that even though my column was generated, I still had to include it in the imported data; I just left it empty. The import worked fine once the column name was present and mapped to my DB column name.

Getting error Use INSERT with a column list to exclude the timestamp column, or insert a DEFAULT into the timestamp column in SQL Server

I need to import data from MySQL to SQL Server, but when I run the SQL query I get an error:
Use INSERT with a column list to exclude the timestamp column, or insert a DEFAULT into the timestamp column
I know the modified_date column has a default current timestamp, so it doesn't allow me to insert that date, but I need to import into SQL Server the same data that is in the MySQL database.
Can anyone please help me resolve this issue?
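The error message itself points at the fix: in SQL Server a timestamp (rowversion) column is filled in automatically and cannot receive an explicit value, so the INSERT must list every column except it. A sketch, assuming modified_date is the rowversion column and using illustrative table names:

```sql
-- Sketch, assuming a target table like (names are illustrative):
--   CREATE TABLE dbo.Orders (id INT, amount DECIMAL(10,2), modified_date ROWVERSION);
-- List every column except the timestamp/rowversion one; SQL Server fills it in.
INSERT INTO dbo.Orders (id, amount)
SELECT id, amount
FROM mysql_import_staging;
```

The original MySQL value for that column is lost this way; if it must be preserved, store it in a separate ordinary datetime column instead.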

Bulk insert in SQL and insert others values

I have one table with 3 fields:
id_Complex | fileLine | date
The field id_Complex is an ID generated by my program; it stays the same for every line of a file and only changes when another file is processed. fileLine is just a line from the file, and date is the date the line was recorded.
Currently, my program makes one insert into the database for each line read from the file.
I want to know if it is possible to do a bulk insert that fills only specific columns of the table: I would send just the id_Complex to SQL Server, and it would make the insert using that id_Complex, the lines of the file and the date.
How can I make that bulk insert?
Is it possible to do a bulk insert where one column has a predefined value?
You could preprocess the input file in your program and generate a temp file that already contains the correct complex_id, then run the bulk insert on that temp file.
After the insert, just delete the temp file.
If I understand what you are asking, you could create a temporary table TempTable and do a bulk insert into it. Then perform an UPDATE from TempTable joining to your permanent table by id_Complex. You can also set the date in this UPDATE statement. Finally, clear out the temporary table.
Alternatively, you could bulk import the file into a temporary table, delete the old permanent table, and rename the temporary table as the permanent table.
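A minimal T-SQL sketch of the temporary-table approach; the file path, target table name, and the specific id value are assumptions:

```sql
-- Stage the raw lines first; the input file contains only fileLine values.
CREATE TABLE #TempLines (fileLine NVARCHAR(4000));

BULK INSERT #TempLines
FROM 'c:\input.txt'
WITH (ROWTERMINATOR = '\n');

-- Supply the predefined id and the recording date at insert time.
DECLARE @id_Complex INT = 42;  -- value sent by the program (illustrative)

INSERT INTO MyTable (id_Complex, fileLine, date)
SELECT @id_Complex, fileLine, GETDATE()
FROM #TempLines;

DROP TABLE #TempLines;
```

This replaces per-line inserts with one bulk load plus one set-based insert, which is usually much faster.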

add multi rows to coupons sql with csv(one field only)

I have a table with the structure:
I also have a csv containing only the coupon codes.
I want to use the same values for the other fields (except id, of course).
What would be the SQL required to insert these rows?
Using phpMyAdmin, you can insert data from a CSV only if it contains all of the required (NOT NULL) column values; the others, when missing, will be filled with defaults, NULL, or auto-incremented values. With this tool it is not possible to insert only the codes from the CSV and use the same fixed values for the other fields. So you have two options: either set the default values of the columns not being updated from the CSV to the desired ones, or create a PHP import script that will do it for you (if you do not want to change the DB table schema).
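Outside phpMyAdmin, MySQL's LOAD DATA INFILE can do this directly: its SET clause assigns fixed values to the columns not present in the file. A sketch, assuming a one-code-per-line codes.csv and illustrative table and column names, since the real schema isn't shown:

```sql
-- Each csv line holds one coupon code; the remaining columns get fixed values.
-- Table and column names (coupons, code, discount, active) are assumptions.
LOAD DATA INFILE '/path/to/codes.csv'
INTO TABLE coupons
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
(code)
SET discount = 10, active = 1;
```

The id column is simply omitted, so AUTO_INCREMENT fills it as usual.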

Using SQL Server, how do I manually set a given column's value during a Bulk Insert of a CSV file?

Background:
I have a collection of CSV files that each have the same data structure. Each file was generated on a given calendar day so, for example, I might have one file from 10/8/13 with 20,000 records, one file from 10/9/13 with 50,000 records, and so on.
All of this CSV data needs to be imported into a SQL Server table, but I have added a column for RecordDate which needs to be set to the value of the day the record was generated.
In total, I have fourteen of these CSV files, so I don't mind running fourteen bulk insert operations like this one:
BULK INSERT CSVTest
FROM 'c:\csvtest.txt'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)
GO
But each time I do this, I need to set that RecordDate column to the date that the particular CSV I am inserting was generated.
Question:
How do I manually set a given column's value during a bulk Insert of a CSV file?
A simple way would be to add a default constraint to the RecordDate column on the table before each bulk insert is executed (or modify the existing constraint, if there is one). Each inserted row will pick up the default for the RecordDate column (obviously assuming the row doesn't contain a value getting imported into RecordDate). Remove the constraint after the bulk insert is completed.
alter table <Table Name>
add constraint <constraint name>
default <relevant date here> for [RecordDate]
<run the bulk insert...>
alter table <Table Name>
drop constraint <Constraint Name>
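For the file generated on 10/8/13, the template above might be filled in like this; the constraint name and per-day file name are illustrative:

```sql
-- Pin RecordDate for this batch via a temporary default constraint.
ALTER TABLE CSVTest
ADD CONSTRAINT DF_CSVTest_RecordDate DEFAULT '2013-10-08' FOR RecordDate;

BULK INSERT CSVTest
FROM 'c:\csvtest.txt'  -- the file for that day
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

ALTER TABLE CSVTest
DROP CONSTRAINT DF_CSVTest_RecordDate;
```

Repeating this fourteen times with the matching date and file gives each batch the correct RecordDate.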