I am looking for a good generic stored procedure in SQL Server that loads CSV files, using the header row to create the necessary columns for the temp table.
For example, if I have been told to load a CSV file with 20 columns into SQL Server, I have to create all 20 columns by hand and then load the data. I think that is a waste of time.
So I was wondering if there is a way to read the header row and use it to create the columns, whether the file has 10 or 20 of them, and then use a bulk insert to load the data itself.
Suggestions?
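There is no built-in procedure for this, but the core idea can be sketched in a few lines of dynamic T-SQL: read the header line of the file, turn it into a CREATE TABLE, then BULK INSERT the remaining rows. This is only a rough sketch, not a hardened procedure; the file path and table name are placeholders, and it assumes a simple comma-delimited file with no quoted fields:

DECLARE @header NVARCHAR(MAX), @sql NVARCHAR(MAX);

-- Pull the whole file in as one blob and keep just the first line.
SELECT @header = BulkColumn
FROM OPENROWSET(BULK 'C:\data\input.csv', SINGLE_CLOB) AS f;
SET @header = REPLACE(LEFT(@header, CHARINDEX(CHAR(10), @header) - 1),
                      CHAR(13), '');

-- Build one NVARCHAR column per header field, then bulk load rows 2..n.
-- A global temp table (##) is used so it survives the EXEC scope.
SET @sql = N'CREATE TABLE ##csv (['
         + REPLACE(@header, ',', N'] NVARCHAR(4000), [')
         + N'] NVARCHAR(4000));
BULK INSERT ##csv FROM ''C:\data\input.csv''
WITH (FIELDTERMINATOR = '','', ROWTERMINATOR = ''\n'', FIRSTROW = 2);';
EXEC (@sql);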
I have a CSV file with two columns. The file has over 200,000 rows. In the database I have a table with the same columns and values.
How can I write a script to find the values that are present in the file but not in the database?
I am using SQL Developer for this.
Creating an external table is the best option when you want to read the contents of a flat file with a SELECT query. See the Oracle documentation on external tables for details on how to create one.
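For example, a minimal external table over a two-column file might look like the following; the directory object data_dir, the file name, and the column definitions are assumptions:

CREATE TABLE new_external_table (
  column_name  VARCHAR2(100),
  other_column VARCHAR2(100)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('values.csv')
);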
After creating the external table, you can use a query similar to the one below to identify the records that exist only in the external table (i.e. the flat file).
select *
from new_external_table et
where not exists (select 1 from source_table st where et.column_name = st.column_name);
I have a table containing 5 million records, and one of its columns holds PDF data as a BLOB. I want to move this table's records from one server to another.
I tried the methods below:
Import and Export wizard
Linked server.
But both take quite some time. Is there any other bulk copy method or command to transfer these records from one server to another quickly?
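One bulk copy command worth trying here is the bcp utility in native format, which round-trips BLOB columns and avoids the wizard's overhead; a sketch, where the database, table, and server names are all placeholders:

bcp MyDb.dbo.PdfDocs out pdfdocs.dat -S SourceServer -T -n -b 50000
bcp MyDb.dbo.PdfDocs in pdfdocs.dat -S TargetServer -T -n -b 50000

The -n flag keeps the data in SQL Server's native types (including the BLOB column), and -b commits in batches so the transaction log does not balloon.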
I am trying to update just one column of 1 million records in a table, based on values in a CSV file.
Sample CSV file:
1,Apple
2,Orange
3,Mango
The first column in the file is the PK I will use to filter the records, and the second column is the new value for the column I want to update. The PK in the CSV file may or may not exist in the DB, though. I was thinking of creating a script that generates a million UPDATE statements based on the file. I would like to know if there is a better way to do this.
Personally, I would:
load the CSV file into a new table using sqlldr
make sure the correct indexes are on the new and existing tables
write ONE update statement to update the existing table from the new one (see the sketch below)
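A sketch of that single UPDATE, assuming the CSV was loaded into a staging table staging_fruits(id, name) and the target is target_table(id, fruit_name); all of these names are placeholders:

-- Update only the rows whose PK actually appears in the file.
UPDATE target_table t
SET    t.fruit_name = (SELECT s.name
                       FROM   staging_fruits s
                       WHERE  s.id = t.id)
WHERE  EXISTS (SELECT 1
               FROM   staging_fruits s
               WHERE  s.id = t.id);

The WHERE EXISTS clause matters here: without it, every non-matching row in the target would have its column set to NULL.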
I would:
Create an external table using the CSV
Update the existing table from the external table in just one UPDATE
I have a .csv file that gets pivoted into 6 million rows during an SSIS package. I have a table in SQL Server 2005 with 25 million+ rows. The .csv file contains data that duplicates data in the table. Is it possible for rows to be updated if they already exist, and what would be the best method to achieve this efficiently?
Comparing 6M rows against 25M rows is not going to be very efficient with a Lookup, or with a SQL command data flow component being called once per row to do an upsert. In cases like this it is often most efficient to load the rows quickly into a staging table and use a single set-based SQL command to do the upsert (sketched below).
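A sketch of such a set-based upsert from a staging table; SQL Server 2005 predates MERGE, so it is an UPDATE followed by an INSERT, and every table and column name here is a placeholder:

-- Update the rows that already exist in the big table.
UPDATE t
SET    t.value = s.value
FROM   dbo.BigTable t
JOIN   dbo.Staging s ON s.id = t.id;

-- Insert the rows that do not exist yet.
INSERT INTO dbo.BigTable (id, value)
SELECT s.id, s.value
FROM   dbo.Staging s
WHERE  NOT EXISTS (SELECT 1 FROM dbo.BigTable t WHERE t.id = s.id);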
Even if you do decide to do the lookup, split the flow into two streams: one that inserts directly, and one that inserts into a staging table for an update operation.
If you don't mind losing the old data (i.e. the latest file is all that matters, not what's in the table), you could erase all the records in the table and insert them again.
You could also load into a temporary table and determine what needs to be updated and what needs to be inserted from there.
You can use the Lookup transformation to identify matching rows between the CSV and the table, then pass its output to another table or data flow and use an Execute SQL task to perform the required update.
Is it possible for me to write an SQL query from within phpMyAdmin that will search for records in a .csv file and match them to a table in MySQL?
Basically I want to do a WHERE IN query, but I want the WHERE IN to check records in a .csv file on my local machine, not a column in the database.
Can I do this?
I'd load the .csv content into a new table, do the comparison/merge and drop the table again.
Loading .csv files into MySQL tables is easy:
LOAD DATA INFILE 'path/to/industries.csv'
INTO TABLE `industries`
FIELDS TERMINATED BY ';'
IGNORE 1 LINES (`nogaCode`, `title`);
There are a lot more things you can tell the LOAD DATA command, like which character encloses the fields, etc.
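Once the file is loaded, the WHERE ... IN comparison from the question becomes an ordinary query. A sketch against the industries table loaded above, where the outer table and its code column are hypothetical:

SELECT t.*
FROM mytable t
WHERE t.code IN (SELECT nogaCode FROM industries);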
I would do the following (sketched below):
Create a temporary or MEMORY table on the server
Copy the CSV file to the server
Use the LOAD DATA INFILE command
Run your comparison
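A sketch of steps 1 and 3, with placeholder column names and file path; the comparison itself then runs exactly as in the previous answer:

CREATE TEMPORARY TABLE csv_values (
    id  INT,
    val VARCHAR(255)
) ENGINE = MEMORY;

LOAD DATA INFILE '/tmp/values.csv'
INTO TABLE csv_values
FIELDS TERMINATED BY ',';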
There is no way to have the CSV file on the client and the table on the server and be able to compare the contents of both using only SQL.
Short answer: no, you can't.
Long answer: you'll need to build the query locally, maybe with a script (Python/PHP), or just upload the CSV into a table and do a JOIN query (or just WHERE x IN (SELECT y FROM mytmpTABLE...)).
For anyone asking this now, there is a newer tool that I used: Write SQL on CSV file.