I have a comma delimited text file containing discrepancies across two different databases, and need to update one of the databases with information from the aforementioned text file. The text file is in the following format:
ID valueFromDb1 valueFromDb2
1 1234 4321
2 2345 5432
... ... ...
I need to update a table by checking for the ID value and, where valueFromDb1 exists, replacing it with valueFromDb2. There are around 11,000 rows that need to be updated. Is there a way I can access the information in this text file directly through an SQL query? My other thought was to write a Java program to do this for me, but I'm not convinced that is the easiest solution.
The article below demonstrates one way to read a text file in MS SQL Server by using xp_cmdshell. In order for it to work, the file has to be on one of the server's drives. Once you have the file loaded into a table variable (which is what the code in the article will do), you should be able to do the joins and updates pretty easily. Let us know if you need any other help.
http://www.kodyaz.com/articles/read-text-file-using-xp_cmdshell.aspx
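If xp_cmdshell is not an option on your server, BULK INSERT gets you to the same place: stage the file in a temporary table, then update the target with a join. Here is a minimal sketch (the file path, target table, and column names are assumptions, not your actual schema):

CREATE TABLE #discrepancies (
    id           INT,
    valueFromDb1 VARCHAR(50),
    valueFromDb2 VARCHAR(50)
);

-- load the comma-delimited file from a drive on the server, skipping the header row
BULK INSERT #discrepancies
FROM 'C:\data\discrepancies.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);

-- replace valueFromDb1 with valueFromDb2 wherever the IDs match and the old value is present
UPDATE t
SET    t.someValue = d.valueFromDb2
FROM   dbo.TargetTable AS t
JOIN   #discrepancies  AS d ON d.id = t.id
WHERE  t.someValue = d.valueFromDb1;

DROP TABLE #discrepancies;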
I'm trying to find a specific string in my database. I'm currently using FlameRobin to open the FDB file, but this software doesn't seem to have a proper feature for this task.
I tried the following SQL query, but it didn't work:
SELECT
*
FROM
*
WHERE
* LIKE '126278'
So, in the end, what is the best way to do this? Thanks in advance.
You can't do such a thing directly. But you can convert your FDB file to a text format like CSV, so you can search for your string in all the tables/files at the same time.
1. Download a database converter
First, you need software to convert your database file. I recommend using Full Convert: just get the free trial and download it. It is really easy to use, and it will export each table to a separate CSV file.
2. Find your string in multiple files at the same time
For that task you can use the Find in Files feature of Notepad++ to search for the string in all the CSV files located in the same folder.
3. Open the desired table in FlameRobin
When Notepad++ finds the string, it shows which file it is in and the line number. Full Convert saves each CSV with the same name as the original table, so you can easily find the corresponding table in whatever database manager you are using.
Here is the Firebird documentation: https://www.firebirdsql.org/file/documentation/reference_manuals/fblangref25-en/html/fblangref25.html
You need to read about:
stored procedures of the "selectable" kind,
the EXECUTE STATEMENT command, including the FOR EXECUTE STATEMENT variant,
the system tables that have "RELATION" in their names.
Then, in your SP, you enumerate all the tables, then you enumerate all the columns in those tables, and for every one of them you run a usual
select 'tablename', 'columnname', columnname
from tablename
where columnname containing '12345'
over every field of every table.
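To make that concrete, here is a rough, untested sketch of such a selectable procedure for Firebird 2.5. It assumes unquoted (upper-case) identifiers; columns that cannot be searched with CONTAINING are simply skipped:

SET TERM ^ ;

CREATE OR ALTER PROCEDURE SEARCH_ALL_FIELDS (SEARCH_FOR VARCHAR(100))
RETURNS (TABLE_NAME VARCHAR(63), FIELD_NAME VARCHAR(63), HITS INTEGER)
AS
BEGIN
  /* enumerate every column of every user table via the RDB$ system tables */
  FOR SELECT TRIM(rf.RDB$RELATION_NAME), TRIM(rf.RDB$FIELD_NAME)
      FROM RDB$RELATION_FIELDS rf
      JOIN RDB$RELATIONS r ON r.RDB$RELATION_NAME = rf.RDB$RELATION_NAME
      WHERE COALESCE(r.RDB$SYSTEM_FLAG, 0) = 0   /* user tables only */
        AND r.RDB$VIEW_BLR IS NULL               /* skip views */
      INTO :TABLE_NAME, :FIELD_NAME
  DO
  BEGIN
    /* run a dynamic CONTAINING search over the current column */
    EXECUTE STATEMENT
      'select count(*) from ' || TABLE_NAME ||
      ' where ' || FIELD_NAME || ' containing ''' || SEARCH_FOR || ''''
      INTO :HITS;
    IF (HITS > 0) THEN
      SUSPEND;                                   /* one output row per matching column */
    WHEN ANY DO
      HITS = 0;                                  /* ignore columns that cannot be searched */
  END
END^

SET TERM ; ^

/* usage: SELECT * FROM SEARCH_ALL_FIELDS('126278'); */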
But practically speaking, it would most probably be better to avoid SQL commands and just extract ALL of the database into one long SQL script, open that script in Notepad (or any other text editor), and search for the string you need there.
I have 4 different text files, each with a different name and different columns, placed in one folder. I want these 4 files to be inserted or updated into 4 different existing tables. So how do I read these 4 files dynamically and insert them into their respective tables dynamically in SSIS?
Well, you need to use a Data Flow Task to move data from a Flat File Source to a table destination (an OLE DB Destination, perhaps). Are the columns in your files delimited in any way, for example with any of these: (;), (|) or something like that? If they are, you can create a Flat File Connection Manager and set it to split the columns. If not, you might need to use the Fixed Width option to separate your columns. To use the OLE DB Destination, you will need to create an OLE DB Connection Manager that points to the table in your database. I could help you more if I had more information about the files you want to read the data from.
EDIT
Well, you said at the start that you were working with 4 files and 4 tables, so you can create 4 Flat File Sources with 4 OLE DB Destinations as well (one of each for each flat file). If I understood you correctly, these 4 files may or may not exist yet. So, if you know the names that the files will get, change the package property DelayValidation to true, and then create a connection with a sample text file. You do this so the file path gets saved. The tables, in my opinion, DO need to exist. Now, when you said:
i want to load all the text files into each different existing table whenever there is files inside the folder.
The only way I know to do something similar is to schedule the execution of your package at a certain time with a SQL Server Agent job. Please let me know if this is what you were looking for.
My team uses a query that generates a text file over 500MB in size.
The query is executed from a Korn Shell script on an AIX server connecting to DB2.
The results are ordered and grouped by a specific field.
My question: Is it possible, using SQL, to write all rows with this specific field value to its own text file?
For example: All rows with field VENDORID = 1 would go to 1.txt, VENDORID = 2 to 2.txt, etc.
The field in question currently has 1000+ different values, so I would expect the same number of text files.
Here is an alternative approach that gets each file directly from the database.
You can use the DB2 EXPORT command to generate each file. Something like this should be able to create one file:
db2 export to 1.txt of DEL select * from table where vendorid = 1
I would use a shell script or something like Perl to automate the execution of such a command for each value.
Depending on how fancy you want to get, you could just hardcode the range of vendorid values, or you could first get the list of distinct vendorids from the table and use that.
This method might scale a bit better than extracting one huge text file first.
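If you would rather drive the loop from the database itself, another option is to have SQL generate one EXPORT command per vendor id and feed the result to the DB2 command line processor as a script. A rough sketch, where MYSCHEMA.ORDERS stands in for your real table:

-- generate one EXPORT command per distinct vendor id;
-- save the output to a file (e.g. export_all.clp) and run it with: db2 -tvf export_all.clp
SELECT 'EXPORT TO ' || LTRIM(RTRIM(CHAR(vendorid))) || '.txt OF DEL ' ||
       'SELECT * FROM MYSCHEMA.ORDERS WHERE vendorid = ' || LTRIM(RTRIM(CHAR(vendorid))) || ';'
FROM (SELECT DISTINCT vendorid FROM MYSCHEMA.ORDERS) AS v;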
I have about 100,000+ delimited text files (they don't all have the same number of columns; e.g. some files have 10 columns, some have 20, and so on). I need to upload all of them to SQL Server. Please suggest how I can do it.
I also have an Excel spreadsheet listing the names/paths where the files are stored and the number of columns in each text file. I am clueless about how to go about it.
Thanks in advance
I assume you are able to use C# (or another programming language) to create an app which will help you complete the task. The program should do the following:
Run through all the files and determine all the columns you need.
Create a table on SQL Server with the columns the program found. Set the datatype to varchar(max) for each column (there is a sketch of such a table after this list).
Run through all the files again and insert the data into the table. There are two ways you can go:
a) Insert data row by row. Pretty slow, but simple.
b) If you use C#, you can implement your own IDataReader and use the SqlBulkCopy class to bulk insert data into the table.
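As a reference for step 2, here is a sketch of what the generated staging table could look like; the column names are hypothetical, since your program would derive them from the files themselves:

-- hypothetical staging table covering the union of all columns found across the files;
-- everything is varchar(max) so any file's values load without conversion errors
CREATE TABLE dbo.ImportStaging (
    SourceFile varchar(260) NULL,  -- which text file the row came from
    Col01      varchar(max) NULL,
    Col02      varchar(max) NULL,
    Col03      varchar(max) NULL
    -- ... one column per distinct column discovered across the files
);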
Here are some links which may help you:
http://www.sqlteam.com/article/use-sqlbulkcopy-to-quickly-load-data-from-your-client-to-sql-server
http://www.michaelbowersox.com/2011/12/22/using-a-custom-idatareader-to-stream-data-into-a-database/
http://daniel.wertheim.se/2010/11/10/c-custom-datareader-for-sqlbulkcopy/
I have an Excel spreadsheet with a few thousand entries in it. I want to import the table into a MySQL 4 database (that's what I'm given). I am using SQuirrel for GUI access to the database, which is being hosted remotely.
Is there a way to load the columns from the spreadsheet (which I can name according to the column names in the database table) to the database without copying the contents of a generated CSV file from that table? That is, can I run the LOAD command on a local file instructing it to load the contents into a remote database, and what are the possible performance implications of doing so?
Note, there is an auto-generated field in the table for assigning ids to new values, and I want to make sure that I don't override that id, since it is the primary key on the table (as well as other compound keys).
If you only have a few thousand entries in the spreadsheet then you shouldn't have performance problems (unless each row is very large of course).
You may have problems with some of the Excel data (e.g. currencies); it's best to try it and see what happens.
Re-reading your question: you will have to export the Excel sheet to a text file that is stored locally. But there shouldn't be any problems loading a local file into a remote MySQL database. I'm not sure whether you can do this with SQuirreL; you would need access to the MySQL command line to run the LOAD command.
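If you do get to the MySQL command line, the statement would look roughly like this (the table and column names are placeholders; listing the columns explicitly and leaving out the id column lets MySQL assign the auto-increment key itself):

-- load a locally stored CSV into a (possibly remote) MySQL table;
-- LOCAL makes the client read the file and send it to the server
LOAD DATA LOCAL INFILE '/path/to/spreadsheet.csv'
INTO TABLE my_table
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES              -- skip the header row, if the export has one
(col_a, col_b, col_c);      -- omit the auto-increment id column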
The best way to do this would be to use Navicat, if you have the budget to make a purchase.
I made this tool, where you can paste in the contents of an Excel file and it generates the CREATE TABLE and INSERT statements, which you can then just run. (I'm assuming SQuirreL lets you run an SQL script?)
If you try it, let me know if it works for you.