Query multiple databases with variable database names - sql

I have about 15 databases with the same tables that I need to run the same query on and combine. I usually copy and paste the query and rename the database each time, but this isn't scalable, and I may need to add additional databases in the future.
To avoid editing the query for each one, is there a way to essentially loop through a table I create containing the database names and a few flags, and run the query against any database whose flags match?
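For illustration, one way to do this is to drive the query from a control table with dynamic SQL. A minimal T-SQL sketch follows; the QueryTargets table, the RunQuery flag, and MyTable are hypothetical placeholders for your own names and query:

    -- Hypothetical control table listing each database and a flag saying whether to include it
    CREATE TABLE dbo.QueryTargets (DatabaseName sysname NOT NULL, RunQuery bit NOT NULL);
    -- ...populate QueryTargets once, then:

    DECLARE @sql nvarchar(max) = N'';

    -- Build one UNION ALL query covering every flagged database
    SELECT @sql = @sql
        + CASE WHEN @sql = N'' THEN N'' ELSE N' UNION ALL ' END
        + N'SELECT ''' + DatabaseName + N''' AS SourceDb, t.* FROM '
        + QUOTENAME(DatabaseName) + N'.dbo.MyTable AS t'   -- replace with the real query
    FROM dbo.QueryTargets
    WHERE RunQuery = 1;

    EXEC sys.sp_executesql @sql;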

Related

How to copy and rename multiple tables in SQL Server?

I want to copy and rename 100+ tables in a SQL Server database, retaining all their columns and data.
I am cloning a website and want to keep all of the cloned data in the same database.
For example, say I have 100 tables that begin with the prefix "foo" and I want to create new tables with the prefix "anotherfoo" that contain all the same data.
How can I do this in the most efficient way possible?
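For illustration, a minimal dynamic SQL sketch of one way to do this; it copies columns and data with SELECT ... INTO, but not indexes, constraints, or triggers, and the 'foo'/'anotherfoo' prefixes are the ones from the question:

    -- Copy every table whose name starts with 'foo' to a new table prefixed 'anotherfoo'
    DECLARE @name sysname, @sql nvarchar(max);

    DECLARE table_cursor CURSOR FOR
        SELECT name FROM sys.tables WHERE name LIKE 'foo%';

    OPEN table_cursor;
    FETCH NEXT FROM table_cursor INTO @name;

    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- 'fooX' becomes 'anotherfooX'
        SET @sql = N'SELECT * INTO ' + QUOTENAME(N'another' + @name)
                 + N' FROM ' + QUOTENAME(@name) + N';';
        EXEC sys.sp_executesql @sql;
        FETCH NEXT FROM table_cursor INTO @name;
    END

    CLOSE table_cursor;
    DEALLOCATE table_cursor;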

How can I copy and overwrite data of tables from database1 to database2 in SQL Server

I have database1, which has more than 500 tables, and database2, which has the same number of tables with the same names. Some of the tables have different definitions; for example, the table reports in database1 has 9 columns while the table reports in database2 has 10.
I want to copy all the data from database1 to database2 so that it overwrites the existing data, and have columns appended where the structures do not match. I tried the Import and Export Wizard in SQL Server 2008, but it fails at the last step, while copying the rows. I don't have a screenshot of the error right now (it's on my office PC), but it says something like "error inserting into the read-only column xyz", and sometimes it mentions VS_ISBROKEN. For the read-only column error I enabled identity insert as suggested, but it did not help.
Please help me; this is an important opportunity for me at my office.
SSIS and SQL Server 2008 Wizards can be finicky tools.
If you get a "can't insert into column ABC", then it could be one of the following:
Inserting into a PK column -> when setting up the mappings, you need to indicate to overwrite the value
Inserting into a column with a smaller range -> for example from nvarchar(256) into nvarchar(50)
Inserting into a calculated column (pointed out by @Nick.McDermaid)
You could also get issues with referential integrity if your database uses this (most do).
If you're going to do this more often, then I suggest you build an SSIS package instead of using the wizard tooling. This way you will see warnings on all sorts of issues like the ones I've described above. You can then run your package on demand.
Another suggestion I would make is to load the data from DB1 into "stage" tables in DB2. These tables should have no relational integrity, which will allow you to break the process into several steps, as follows.
Stage the data from DB1 into DB2
Produce reports/queries on issues pertinent to your database/rules
Merge the data from stage tables into target tables using SQL
That last step is where you can use MERGE statements, or simple inserts/updates depending on a key match. Using SQL in the local database lets you use set-based operations to manage the overlap of the two sets and figure out what is new or what needs to be updated.
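For example, a sketch of such a merge; the stage and target table names and the key column are hypothetical:

    -- Merge a stage table into its target on a business key
    MERGE dbo.Reports AS target
    USING dbo.Stage_Reports AS source
        ON target.ReportId = source.ReportId
    WHEN MATCHED THEN
        UPDATE SET target.Title  = source.Title,
                   target.Amount = source.Amount
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (ReportId, Title, Amount)
        VALUES (source.ReportId, source.Title, source.Amount);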
SSIS "can" do this, but you will not be able to do a bulk update using SSIS, whereas with SQL you can. SSIS would do what is known as RBAR (row by agonizing row), something slow and to be avoided.
I suggest you inform your seniors that this will take a little longer to ensure it is reliable and the results are reportable. Then work step by step, reporting on each stage's completion.
Another two small suggestions:
Create _Archive tables for each of the stage tables and add a Tstamp column to each. Merge into these after the stage step; this will allow you to quickly see which rows were introduced into DB2 and when.
After the stage step and before the SQL merge step, create indexes on your stage tables. This will improve the merge performance.
Drop those indexes after each merge; this will increase the bulk insert performance on the next load (see the sketch below).
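A sketch of that index handling around the merge, with hypothetical table and column names:

    -- Before the merge: index the stage table on the join key
    CREATE NONCLUSTERED INDEX IX_Stage_Reports_ReportId
        ON dbo.Stage_Reports (ReportId);

    -- ... run the MERGE into the target table here ...

    -- After the merge: drop the index so the next bulk load into the stage table stays fast
    DROP INDEX IX_Stage_Reports_ReportId ON dbo.Stage_Reports;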
Basics of staging (in response to the question clarification):
Links:
http://www.codeproject.com/Articles/173918/How-to-Create-your-First-SQL-Server-Integration-Se
http://www.jasonstrate.com/tag/31daysssis/
http://blogs.msdn.com/b/andreasderuiter/archive/2012/12/05/designing-an-etl-process-with-ssis-two-approaches-to-extracting-and-transforming-data.aspx
Staging is the act of moving data from one place to another without any checks.
First you need to create the target tables; the schema should match the source tables.
Open up BIDS and create a new Project and in it a new SSIS package.
In the package, create a connection for the source server and another for the destination.
Then create a data flow step; in the step, create a data source for each table you want to copy from.
Connect each source to a new data destination and set the appropriate connection and table.
When done, save and do a test run.
Before the data flow step, you might like to add a SQL step that will truncate all the target tables.
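That SQL step can be a simple Execute SQL Task running something like the following; the stage table names are placeholders:

    -- Clear the target (stage) tables before each load
    TRUNCATE TABLE dbo.Stage_Reports;
    TRUNCATE TABLE dbo.Stage_Customers;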
If you're open to using tools, then what about something like Red Gate SQL Compare and Red Gate SQL Data Compare?
First I would use SQL Compare to manage the schema differences and add the new columns you want to your destination database (database2) from the source (database1). Then with Data Compare you match the contents of the tables; for any columns it can't match based on names, you specify how to handle them. Then you can pick and choose what data you want to copy to your destination, so you'll see what data is new and what's different (you can delete data in the destination that's not in the source, or ignore it). You can either have the tool do the work or have it create a script for you to run when you want.
There's a 15 day trial if you want to experiment.
It seems like you may be looking for replication technology, as offered by SQL Server Replication.
Well, if I understood your requirement correctly, you need to make database2 a replica of database1. Why not take a full backup of database1 and restore it as database2? Your database2 will then be exactly what database1 was at the time of the backup.
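A minimal sketch of that backup/restore; the file paths and logical file names are placeholders, so check the logical names with RESTORE FILELISTONLY first:

    -- Back up database1 and restore the copy as database2
    BACKUP DATABASE database1
        TO DISK = N'C:\Backups\database1.bak' WITH INIT;

    RESTORE DATABASE database2
        FROM DISK = N'C:\Backups\database1.bak'
        WITH MOVE N'database1'     TO N'C:\Data\database2.mdf',      -- logical data file name (placeholder)
             MOVE N'database1_log' TO N'C:\Data\database2_log.ldf',  -- logical log file name (placeholder)
             REPLACE;                                                -- overwrite an existing database2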

Export selected tables whose names come from a query

Our production database has around 2000 tables and all of them are listed in a table (Project) with another column marking them as 1 for test and 0 for prod. I need to export the tables marked as 0 to our test environment.
So how is it possible, in SQL Server or SSIS, to export some of the tables whose names are the result of a query?
Table names will be result of this query:
select tableName from [Project] where test = 0
The task of taking part of a database's structure and data is not trivial unless your tables have no indexes, relations, and so on. A relatively easier task is to drop what you do not need. But again, if your tables have any relations, you need to do the dropping in the correct order. So the task could be completed in the following steps:
Add one more column to the table Project, say DropOrder. Fill it in manually with the correct order, keeping your current constraints in mind.
Make a backup of your production database and restore it in the test location.
Run a script to drop the test tables. This script can use a simple dynamic SQL DROP statement (a sketch follows below).
Of course, this is a general idea without knowing many details. For example, it may be better to swap steps 2 and 3, or to build more complex functionality into the dropping script.
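A sketch of such a dropping script, run against the restored copy, using the Project table from the question and the suggested DropOrder column; the test = 1 filter assumes the tables marked as test are the ones to remove:

    -- Build the DROP statements for the test-marked tables, preserving DropOrder
    DECLARE @sql nvarchar(max);

    SELECT @sql =
        (SELECT N'DROP TABLE ' + QUOTENAME(tableName) + N'; '
         FROM [Project]
         WHERE test = 1
         ORDER BY DropOrder
         FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)');

    EXEC sys.sp_executesql @sql;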

Updating/Inserting tables in one database from another database

How can I sync two databases and do a manual refresh of the entities in either of the databases whenever I want?
Let's say I have two databases, DB1 (prod) and DB2 (dev). I want to update/insert only a few tables from the prod DB to the dev DB. How could I achieve this? Is this possible without a DB link, since I do not have privileges to create one?
If you only want to do a manual refresh, set up an import/export/Data Pump script to copy the data across, provided there is not too much data involved. If there is a large amount of data, you could write some PL/SQL as described above to move only the new/changed rows. This will be easier if your tables have fields such as created/updated_on.
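For the incremental case, a hedged sketch; it assumes the prod table is readable from the dev schema (for example via a grant, or via a staging copy loaded by export/import), and the schema, table, and column names are placeholders:

    -- Refresh only rows created or changed since the last sync, keyed on updated_on
    MERGE INTO dev_schema.employees d
    USING (
        SELECT employee_id, name, salary, updated_on
        FROM   prod_schema.employees
        WHERE  updated_on > DATE '2024-01-01'      -- timestamp of the last refresh (placeholder)
    ) p
    ON (d.employee_id = p.employee_id)
    WHEN MATCHED THEN
        UPDATE SET d.name = p.name, d.salary = p.salary, d.updated_on = p.updated_on
    WHEN NOT MATCHED THEN
        INSERT (employee_id, name, salary, updated_on)
        VALUES (p.employee_id, p.name, p.salary, p.updated_on);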

Delete all rows containing a specific number across multiple tables in a postgresql database?

I'm fairly new to SQL and I have a large database that needs some cleanup. In most of the tables, there is a column called "ID number" and I want to go through all of these tables, check each ID number to see if it is on a list that I have of bad IDs, and if it is delete the entire table row containing the ID. Problem is, the list of bad IDs alone is over 3 million long and the total number of table entries is in the hundreds of millions. I don't really know where to start with this and was wondering if anyone could help me out?
You can do this with PL/PgSQL, using a query against the system catalogs to build DELETE queries with format(...) that you then run using EXECUTE.
There are lots of existing examples of such dynamic SQL on Stack Overflow, and of how to query the catalogs to build table lists. Use pg_catalog.pg_class and pg_catalog.pg_attribute or use the information_schema for schema info.
Remember to use format with the %I format specifier for identifiers; don't just concatenate SQL text with ||.
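A minimal PL/pgSQL sketch of that approach; it assumes the bad IDs have already been loaded into a table called bad_ids(id), and that the column really is named "ID number" (both names are placeholders):

    -- Delete rows whose "ID number" appears in bad_ids, in every user table that has that column
    DO $$
    DECLARE
        tbl record;
    BEGIN
        FOR tbl IN
            SELECT c.table_schema, c.table_name
            FROM   information_schema.columns c
            JOIN   information_schema.tables  t
              ON   t.table_schema = c.table_schema AND t.table_name = c.table_name
            WHERE  t.table_type  = 'BASE TABLE'
              AND  c.column_name = 'ID number'          -- placeholder column name
              AND  c.table_schema NOT IN ('pg_catalog', 'information_schema')
        LOOP
            EXECUTE format(
                'DELETE FROM %I.%I WHERE %I IN (SELECT id FROM bad_ids)',
                tbl.table_schema, tbl.table_name, 'ID number');
        END LOOP;
    END
    $$;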
See:
Dynamic SQL with PL/PgSQL EXECUTE
format function
information_schema
System catalogs
Remember: Make sure you have good backups before attempting dynamic DML! A mistake can easily destroy all data in the database. Of course, you should have good backups - preferably PITR WAL archiving with PgBarman plus nightly dumps - anyway...