I have two SQL Server databases, each with a couple hundred tables. The tables in the two databases are 90% the same, with about 20 tables unique to each. I'm working on a stored procedure to update database2 with the data from the tables it shares with database1.
I'm thinking of truncating the tables and then inserting the records from the tables in the other database, like:
truncate table database2.dbo.table2
select *
into database2.dbo.table2
from database1.dbo.table1
Is this the best way to do this, and is there a better way to do it than writing a couple hundred of these statements?
This will give an error because the table already exists in the database (as your TRUNCATE statement implies). The given query would create a new table:
select *
into database2.dbo.table2 -- creates a new table
from database1.dbo.table1
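If you want to keep the existing table (along with its indexes and constraints), the usual pattern is TRUNCATE followed by INSERT ... SELECT instead. A minimal sketch, assuming both tables have identical column lists and no identity column (otherwise you would need an explicit column list and SET IDENTITY_INSERT):

truncate table database2.dbo.table2

insert into database2.dbo.table2
select *
from database1.dbo.table1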
If you want the same table structure and data, you can generate scripts for the schema and data and run those scripts on the other database (DB2):
Right-click the database and select Tasks --> Generate Scripts
Next --> select the required table/tables
Next --> click Advanced --> set Types of data to script = Schema and data
Also change Check for object existence = True
Next and Finish
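To avoid hand-writing a couple hundred of these statements, you could also generate them from the catalog views with dynamic SQL. A rough sketch, assuming every shared table lives in the dbo schema, has the same column order in both databases, has no identity columns, and is not referenced by a foreign key (TRUNCATE would fail on those):

declare @sql nvarchar(max)
set @sql = N''

select @sql = @sql
    + N'truncate table database2.dbo.' + quotename(t1.name) + N'; '
    + N'insert into database2.dbo.' + quotename(t1.name)
    + N' select * from database1.dbo.' + quotename(t1.name) + N'; '
from database1.sys.tables as t1
join database2.sys.tables as t2
    on t2.name = t1.name  -- only the tables that exist in both databases

exec sp_executesql @sql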
Scenario: We have a number of scheduled queries that copy data into a project that we use as our centralized data warehouse. These scheduled queries are configured to run nightly and are set to WRITE_TRUNCATE.
Problem: We added descriptions to the columns in several of our destination tables in order to document them. However, when the scheduled queries ran they removed all of the column descriptions. (Table description was maintained.)
Desired Outcome: Is there a way to insert the column descriptions as part of the scheduled queries, or some other way to avoid having these deleted nightly? Or is that simply a limitation of WRITE_TRUNCATE scheduled queries?
I've searched Google & Stack Overflow, and reviewed the documentation, but I can't find any references to table / column descriptions in relation to scheduled queries.
One solution is, instead of using WRITE_TRUNCATE with a SELECT, to use:
CREATE OR REPLACE TABLE <table_name> ( <column_list_with_descriptions> )
AS SELECT ...
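For example (the dataset, table, and column names below are made up for illustration), the description is attached to each column in the column list:

CREATE OR REPLACE TABLE warehouse.daily_orders (
  order_id INT64 OPTIONS (description = 'Unique order identifier'),
  ordered_at TIMESTAMP OPTIONS (description = 'When the order was placed')
)
AS
SELECT order_id, ordered_at
FROM staging.orders;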
If you don't want to repeat the column descriptions in every scheduled query, you may use:
DELETE FROM table WHERE true;
INSERT INTO table SELECT ...
If atomicity of the update is required, the above queries can be written as one MERGE statement, like:
MERGE full_table
USING (
  SELECT *
  FROM data_updates_table
)
ON FALSE
WHEN NOT MATCHED BY SOURCE THEN DELETE
WHEN NOT MATCHED BY TARGET THEN INSERT ROW
I have a report on SSRS which has 14 subreports. All of these subreports read from the same stored procedure but present the data in different ways (because of different calculations). The way I have the SP written is as follows:
IF OBJECT_ID('tempdb.dbo.#blabla') IS NOT NULL
BEGIN DROP TABLE #blabla END
SELECT a,b,c,d,e
INTO #blabla
FROM dbo.SourceTable -- FROM clause missing in the original; the table name here is illustrative
WHERE a='bla'
IF @type = 1 --report 1
BEGIN
SELECT ....
END
IF @type = 2 --report 2
BEGIN
SELECT .....
END
And so on for each report.
I create 3 temporary tables at the beginning of the stored procedure, which are the ones that feed the data to be converted. The problem is that the temp tables are recreated for every subreport, which makes the report take a long time to render. Is there any workaround that would let me reuse the tables created at the start of the stored procedure?
Since you are using separate subreports, the queries for each of them won't run on the same connection that created the temp tables. SQL Server drops a local temp table as soon as the connection that created it is closed.
You could try combining all your subreports into one. That would allow every query to use the #TEMP tables if you check the Use Single Transaction box in the data source.
Another way would be to use global temp tables (##TEMP). A global temp table is not dropped until the session that created it ends and every other session referencing it stops, so it can be used by the other subreports.
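A minimal sketch of that approach; the source table, columns, and filter are hypothetical stand-ins for whatever feeds your reports:

IF OBJECT_ID('tempdb..##ReportData') IS NULL
BEGIN
    SELECT a, b, c, d, e
    INTO ##ReportData       -- global temp table, visible to other sessions
    FROM dbo.SourceTable    -- hypothetical source
    WHERE a = 'bla'
END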
You can create another table (a control table) that contains one row with two columns: the start and end dates of the work table's data. When each report starts, it checks the current date against the dates in the control table; if the dates are not current it rebuilds the work tables, otherwise it just continues processing. If you are creating tables that will be shared this way, you probably don't want temp tables at all, just regular tables.
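A sketch of that check; ControlTable, ReportWork, and the source table are all hypothetical names:

IF NOT EXISTS (SELECT 1 FROM dbo.ControlTable
               WHERE CAST(GETDATE() AS date) BETWEEN StartDate AND EndDate)
BEGIN
    TRUNCATE TABLE dbo.ReportWork

    INSERT INTO dbo.ReportWork (a, b, c, d, e)
    SELECT a, b, c, d, e
    FROM dbo.SourceTable    -- hypothetical source
    WHERE a = 'bla'

    UPDATE dbo.ControlTable
    SET StartDate = CAST(GETDATE() AS date),
        EndDate   = DATEADD(day, 1, CAST(GETDATE() AS date))
END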
Another approach is to run a SQL Server job that rebuilds the work table every night at midnight.
By the way, you can create indexes on temporary tables, and you should seriously consider adding a clustered index to the temp table. You may find that you get much faster results running against such a table, even if it's a 'small' table.
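For example, on the temp table above (the choice of key column is hypothetical; pick whatever the report queries filter and join on):

CREATE CLUSTERED INDEX IX_blabla ON #blabla (a)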
I am querying a vendor's database that has data sharded across multiple tables. The tables are named Events_1, Events_2, Events_3, Events_4, etc.
It appears that a new table is automatically created when the current one hits 10,000 records. I am looking to write a query that unions the results from all the tables (without having to manually add tables as they are created), as well as a query that looks at only the 2 newest tables.
Any suggestions on the best way to go about this?
The database is Microsoft SQL Server 2008
You can do this with dynamic SQL: query the catalog (sys.tables, or sysobjects on older versions) for tables named Events_* and build a UNION ALL of SELECTs from the results.
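A sketch of that dynamic query, assuming all the Events_N tables share the same columns:

DECLARE @sql nvarchar(max)

SELECT @sql = COALESCE(@sql + N' UNION ALL ', N'')
            + N'SELECT * FROM dbo.' + QUOTENAME(name)
FROM sys.tables
WHERE name LIKE N'Events[_]%'

EXEC sp_executesql @sql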
Better, though perhaps more complicated, would be a DDL trigger that dynamically updates your UNION view to append the new table whenever an Events_* table is created.
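A rough sketch of such a trigger; the trigger name and the view dbo.AllEvents are hypothetical, and the view must already exist:

CREATE TRIGGER ddl_RefreshEventsView
ON DATABASE
AFTER CREATE_TABLE
AS
BEGIN
    -- only react when a new Events_N shard appears
    IF EVENTDATA().value('(/EVENT_INSTANCE/ObjectName)[1]', 'nvarchar(128)')
       LIKE N'Events[_]%'
    BEGIN
        DECLARE @sql nvarchar(max)

        SELECT @sql = COALESCE(@sql + N' UNION ALL ', N'')
                    + N'SELECT * FROM dbo.' + QUOTENAME(name)
        FROM sys.tables
        WHERE name LIKE N'Events[_]%'

        SET @sql = N'ALTER VIEW dbo.AllEvents AS ' + @sql
        EXEC sp_executesql @sql
    END
END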
This is my problem:
I have an old database with no constraints whatsoever. There are a handful of tables I copy from the old database to the new database. This copy is simple, and I'm running it nightly in a job.
In my new database (which does have constraints), I put all these loose tables in a constraint relationship with a main table. All these tables have a key made of 3 IDs and a string.
My main table translates these 3 IDs and a string to 1 ID, so this table has 5 columns.
In the loose tables, some records can be duplicated, so to insert the IDs into the main table I'd take a distinct of the 3 IDs and the string and insert those into my main table.
The tables in the old database are updated daily. The copy runs daily, and from the main table I'd like to make a one-to-many relation with the copied tables.
This gives me the problem:
how do I update the main table so that the already-inserted keys are left alone and only the new keys are added? Old keys should not be removed, even if they were removed in the old database.
I was thinking of making a distinct view of all keys in the old database, but how would I merge that into the main table? This would need to run before the daily copy of the other tables (or the copy would fail on the constraints).
One other idea is to run this update of the main table in LINQ to SQL on my website, but that doesn't seem very clean.
So in short:
Old DB is SQL Server 2000
New DB is SQL Server 2008
Old DB has no constraints; a copy of some tables happens daily.
There should be a main table translating the 3-ID-and-1-string key to a 1-ID key, with a constraint to the other tables.
The main table must be updated before the copy job, or the constraints will fail. The main table will be a distinct of a few columns of one table; this distinct will be in a view on the old DB.
Only new rows should be added to the main table.
Does anyone have some ideas, some guidance?
Visualize the DB:
These loose tables are details about a company: one table has its address(es), one has its contact person, another has its username and login for our system (of which there could be more than one per company).
A company is identified by the 3 IDs and 1 string. The main table lists these unique IDs and strings so that they can be translated to 1 ID. This 1 ID is then used in the rest of my DB. The one-to-many relation is then made from the main table to all those loose tables. I hope this clears it up a bit :)
I think you could use EXCEPT to insert the IDs that aren't in your main table yet: http://msdn.microsoft.com/en-us/library/ms188055.aspx
So for example:
insert into MainTable (Id1, Id2, Id3, String1)
select Id1, Id2, Id3, String1 from DistinctOldTable
except
select Id1, Id2, Id3, String1 from MainTable
-- leave NewId out of the comparison and let the main table generate it
-- (e.g. an IDENTITY column); including it would make every row look new
I have 2 tables with identical schemas. I need to move rows older than 90 days (based on a datetime column present in the table) from table A to table B. Here is the pseudocode for what I want to do:
DECLARE @Criteria datetime
SET @Criteria = GETDATE() - 90

INSERT INTO TableB
SELECT * FROM TableA
WHERE ColumnX < @Criteria

-- now clean up the records we just moved to table B, in table A
DELETE FROM TableA WHERE ColumnX < @Criteria
My questions are:
What is the most efficient way to do this (will the INSERT ... SELECT perform well under high volumes)? Table A will have ~180,000,000 rows in it, and ~4,000,000 rows at a time will need to move to table B.
How do I encapsulate this in one transaction so that rows are not deleted from Table A if there was an error inserting them into Table B? I just want to make sure I don't delete a row from table A unless I have successfully written it to table B.
Are there any good SQL Server 2005 books that you recommend?
Thanks,
Chris
I think that SSIS is probably the best solution for your needs.
I think you can just use SSIS tasks like the Data Flow task to achieve this. There doesn't seem to be any need to create a separate stored procedure for the logic.
Transactions can be set for any Data Flow task using the TransactionOption property. Check out this article on how to use transactions in SSIS.
Some basic tutorials on SSIS packages and how to create them can be found here and here.
Regarding:
How do I encapsulate this under one transaction so that I will not delete rows from Table A if there was an error inserting them to Table B.
you can delete from A only the rows that are already in B, using a join. Then, if the copy to B failed, nothing will be deleted from A.
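A sketch of that pattern, additionally wrapped in a transaction (the table and key names are hypothetical, and it assumes the tables share a unique key; @Criteria is the cutoff from the question's pseudocode):

DECLARE @Criteria datetime
SET @Criteria = GETDATE() - 90

BEGIN TRY
    BEGIN TRANSACTION

    INSERT INTO TableB
    SELECT * FROM TableA
    WHERE ColumnX < @Criteria

    -- delete only the rows that verifiably made it into TableB
    DELETE a
    FROM TableA AS a
    INNER JOIN TableB AS b ON a.KeyColumn = b.KeyColumn
    WHERE a.ColumnX < @Criteria

    COMMIT TRANSACTION
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION
END CATCH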