Appending multiple SQLite DB files into one single giant file - sql

I have multiple SQLite .db files. Each DB file has the same schema, with multiple tables, but different values in them. I want to merge the DB files into one master DB file that contains the values from all of them: if a row already exists in the master (a duplicate), it should be discarded; otherwise it should be appended.
Preferably, I would like to automate the process.
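
A common way to automate this is SQLite's ATTACH DATABASE combined with INSERT OR IGNORE. A minimal sketch, assuming each table has a PRIMARY KEY or UNIQUE constraint that defines what counts as a duplicate; the file name other.db and the table name measurements are placeholders, and the INSERT would be repeated for every table and every source file (e.g. from a small script):

-- run against the master database, e.g. with the sqlite3 shell
ATTACH DATABASE 'other.db' AS src;
-- INSERT OR IGNORE drops any row that would violate a PRIMARY KEY or
-- UNIQUE constraint in the master copy, which is what discards duplicates
INSERT OR IGNORE INTO main.measurements
SELECT * FROM src.measurements;
DETACH DATABASE src;

If a table has no unique constraint, nothing will be treated as a duplicate, so the deduplication rule has to be expressed as a key (or handled with a NOT EXISTS subquery instead).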

Related

CSV file to SQL Table incremental

I have an external application which creates CSV files. I would like to load these files into SQL automatically, but incrementally.
I was looking into Bulk Insert, but I do not think it is incremental on its own. The CSV files can get huge, so an incremental load is the way to go.
Thank you.
The usual way to handle this is to bulk insert the entire CSV into a staging table, and then do the incremental merge of the data in the staging table into the final destination table with a stored procedure.
If you are still concerned that the CSV files are too big for this, the next step is to write a program that reads the CSV, and produces a truncated file with only the new/changed data that you want to import, and then bulk insert that smaller CSV instead of the original one.
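A hedged T-SQL sketch of that staging-then-merge pattern; every object, column, and file name below is a placeholder:

-- 1) Bulk load the whole CSV into a staging table (file path and format are examples).
BULK INSERT dbo.StagingOrders
FROM 'C:\imports\orders.csv'
WITH (FIRSTROW = 2, FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

-- 2) Incrementally merge staging into the destination: update changed rows,
--    insert new ones, and leave everything else untouched.
MERGE dbo.Orders AS tgt
USING dbo.StagingOrders AS src
      ON tgt.OrderId = src.OrderId
WHEN MATCHED AND tgt.Amount <> src.Amount THEN
    UPDATE SET tgt.Amount = src.Amount
WHEN NOT MATCHED BY TARGET THEN
    INSERT (OrderId, Amount) VALUES (src.OrderId, src.Amount);

-- 3) Clear the staging table for the next run.
TRUNCATE TABLE dbo.StagingOrders;

This is the kind of logic that would normally live inside the stored procedure mentioned above.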
Create a text or CSV file containing the names of all the CSV files you want to load into the table (include the file path if the files are not all in one location). You can build this list with shell scripting.
Then, using a procedure, load those file names into a temporary table.
Using that temporary table, loop over its rows and load each file into the target table (without truncating inside the loop; if a truncate is required, do it before the loop). You can load into a staging table first if any transformation is required (use a procedure for the transformation); a sketch of the loop is below.
We had the same problem and used this method. Recently, we switched to Python, which does all of this and loads the data into a staging table; after transformations, it is finally loaded into the target table.
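A rough T-SQL sketch of that file-list-and-loop approach, assuming a hypothetical dbo.FileQueue table holding the file names and a made-up target table:

DECLARE @file NVARCHAR(260), @sql NVARCHAR(MAX);

-- dbo.FileQueue is the temporary table populated from the list of CSV file names
DECLARE file_cursor CURSOR FOR
    SELECT FileName FROM dbo.FileQueue;

OPEN file_cursor;
FETCH NEXT FROM file_cursor INTO @file;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- build and run a BULK INSERT for each file; no truncate inside the loop
    SET @sql = N'BULK INSERT dbo.TargetTable FROM ''' + @file + N'''
                WITH (FIRSTROW = 2, FIELDTERMINATOR = '','', ROWTERMINATOR = ''\n'');';
    EXEC sp_executesql @sql;
    FETCH NEXT FROM file_cursor INTO @file;
END

CLOSE file_cursor;
DEALLOCATE file_cursor;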

inserting multiple text files

I have 4 different text files, each with a different name and different columns, placed in one folder. I want these 4 files to be inserted or updated into 4 different existing tables. How can I read these 4 files dynamically and insert each one into its respective table in SSIS?
Well, you need to use a Data Flow Task to move data from a Flat File Source to a table destination (an OLE DB Destination, perhaps). Are the columns in your files delimited in any way, for example with (;), (|) or something like that? If they are, you can create a Flat File Connection Manager and configure it to split the columns. If not, you might need to use the Fixed Width option to separate your columns. To use the OLE DB Destination, you will need to create an OLE DB Connection Manager pointing to the table in your database. I could help you more if I had more information about the files you want to read the data from.
EDIT
Well, you said at the start that you are working with 4 files and 4 tables, so you can create 4 Flat File Sources with 4 OLE DB Destinations as well (one of each per flat file). If I understood you correctly, these 4 files may or may not exist yet. If you know the names the files will get, change the package property DelayValidation to true and then create the connection with a sample text file; you do this so the file path gets saved. The tables, in my opinion, DO need to exist. Now, when you said:
I want to load all the text files into each different existing table whenever there are files inside the folder.
The only way I know of to do something similar is to schedule the execution of your package at a certain time with a SQL Server Agent job. Please let me know if this was what you were looking for.
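For what it's worth, such a job can also be created in T-SQL rather than through the Agent GUI. A hedged sketch only; the job name, package path, and schedule below are all invented:

USE msdb;
-- create the job and a step that runs the SSIS package
EXEC dbo.sp_add_job @job_name = N'Load text files';
EXEC dbo.sp_add_jobstep
     @job_name  = N'Load text files',
     @step_name = N'Run package',
     @subsystem = N'SSIS',
     @command   = N'/FILE "C:\packages\LoadTextFiles.dtsx"';
-- run it once a day at 01:00
EXEC dbo.sp_add_schedule
     @schedule_name     = N'Nightly',
     @freq_type         = 4,       -- daily
     @freq_interval     = 1,
     @active_start_time = 10000;   -- HHMMSS, i.e. 01:00:00
EXEC dbo.sp_attach_schedule @job_name = N'Load text files', @schedule_name = N'Nightly';
EXEC dbo.sp_add_jobserver @job_name = N'Load text files';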

How to split big sql dump file into small chunks and maintain each record in origin files despite later other records deletions

Here's what I want to do (MySQL example):
1. dumping only the structure - structure.sql
2. dumping all table data - data.sql
3. splitting data.sql and putting each table's data into a separate file - table1.sql, table2.sql, table3.sql ... tablen.sql
4. splitting each table file into smaller files (1k lines per file)
5. committing all files in my local git repository
6. copying the whole directory out to a remote secure server
I have a problem with step #4.
For instance, I split table1.sql into 3 files: table1_a.sql, table1_b.sql and table1_c.sql.
If a new dump contains new records, that is fine - they just get added to table1_b.sql.
But if records that were in table1_a.sql are deleted, all the following records shift, and git treats table1_b.sql and table1_c.sql as changed, which is not OK.
Basically, it destroys the whole idea of keeping an SQL backup in SCM.
My question: how do I split a big SQL dump file into small chunks and keep each record in its original file, even when other records are later deleted?
To split an SQL dump into files of 5000 lines each, execute in your terminal:
$ split -l 5000 hit_2017-09-28_20-07-25.sql dbpart-
Don't split them at all. Or split them by ranges of PK values. Or split them right down to 1 db row per file (and name the file after tablename + the content of the primary key).
(That apart from the even more obvious XY answer, which was my instinctive reaction.)
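One way to realise the "split by ranges of PK values" idea above in plain SQL is SELECT ... INTO OUTFILE per range, so that each range always lands in the same file and deletions in one range never shift the rows of another. The table name, key column, and output path below are made up, and note that this writes tab-separated data rather than INSERT statements:

SELECT *
INTO OUTFILE '/var/backups/table1_rows_000001_001000.tsv'
FIELDS TERMINATED BY '\t'
FROM table1
WHERE id BETWEEN 1 AND 1000;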

Extract specific data from full mysqldump backup

I am making regular backups of my MySQL database with mysqldump. This gives me a .sql file with CREATE TABLE and INSERT statements, allowing me to restore my database on demand. However, I have yet to find a good way to extract specific data from this backup, e.g. extract all rows from a certain table matching certain conditions.
Thus, my current approach is to restore the entire file into a new temporary database, extract the data I actually want with a new mysqldump call, delete the temporary database and then import the extracted lines into my real database.
Is this really the best way to do this? Is there some sort of script that can directly parse the .sql file and extract the relevant lines? I don't think there is an easy solution with grep and friends unfortunately, as mysqldump generates INSERT statements that insert many values per line.
The solution to this just ended up being to import the whole file, extract the data I needed and drop the database again.
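For the "extract the data I needed" step, once the dump has been restored into a throwaway database, a plain cross-database INSERT ... SELECT does the job (the database, table, and column names below are invented):

-- backup_tmp is the scratch database the .sql file was restored into
INSERT INTO live_db.orders (id, customer_id, total, created_at)
SELECT id, customer_id, total, created_at
FROM backup_tmp.orders
WHERE created_at >= '2017-01-01';
-- throw the scratch copy away afterwards
DROP DATABASE backup_tmp;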

mysql query to import data from a dump into an existing database and overwrite changed entries

I want to import a new dump file into my old database, which has the same schema. If a record has changed, I want to overwrite it with the record from the dump file I am importing, and I want to add new records. I don't want to delete the existing records in the DB I am importing into.
If you create the dump file using mysqldump, you can give it the option --replace, which generates a dump file containing REPLACE statements instead of INSERT statements.
When this dump file is loaded into MySQL, records that match primary or unique keys already in the database will replace the old rows, while records not matching any existing key will be inserted.
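For illustration, a --replace dump contains statements of this shape (table and columns invented here):

REPLACE INTO customers (id, name, email)
VALUES (42, 'Alice', 'alice@example.com');
-- if id 42 (or a matching unique key) already exists, the old row is deleted
-- and this one inserted in its place; otherwise it behaves like a plain INSERT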