SSRS - Reusing temp table in multiple datasets - sql

I have a report on SSRS which has 14 subreports. All of these subreports read from the same stored procedure but present the data in different ways (because of different calculations). The way I have the SP written is as follows:
IF OBJECT_ID('tempdb.dbo.#blabla') IS NOT NULL
BEGIN
    DROP TABLE #blabla
END

SELECT a, b, c, d, e
INTO #blabla
FROM dbo.SourceTable   -- the FROM clause was omitted in the question; this table name is a placeholder
WHERE a = 'bla'

IF @type = 1 --report 1
BEGIN
    SELECT ....
END

IF @type = 2 --report 2
BEGIN
    SELECT ....
END
And so on for each report.
I create 3 temporary tables at the beginning of the stored procedure, and these feed the data for the calculations. The problem is that the tables are recreated for every sub-report, which makes the report take a long time to render. Is there any workaround that would let me reuse the tables created at the start of the stored procedure?

Since you are using separate subreports, the queries for each of them won't run in the same transaction (or session) as the one in which the temp tables were created. SQL Server drops a local temp table as soon as the session that created it closes.
You could try combining all of your subreports into one report. That would let every dataset use the same #TEMP tables if you check the "Use single transaction when processing the queries" box on the data source.
Another way would be to use global temp tables (##TEMP). A global temp table is not dropped until the creating session ends and no other session is still referencing it, so it can be read by the other subreports.
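A minimal sketch of the global temp table variant, reusing the parameter and column names from the question (the source table name is a placeholder):

-- Build the shared staging table only if it does not already exist
IF OBJECT_ID('tempdb.dbo.##blabla') IS NULL
BEGIN
    SELECT a, b, c, d, e
    INTO ##blabla
    FROM dbo.SourceTable   -- placeholder source table
    WHERE a = 'bla'
END

-- Each report branch then reads from the shared table
IF @type = 1 --report 1
BEGIN
    SELECT a, b, c FROM ##blabla
END

Keep in mind that a global temp table is visible to every session on the server, so two concurrent executions of the report could collide; the control-table idea in the next answer avoids that.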

You can create another table (a control table) containing one row with two columns: the start and end dates for which the work table is valid. When each report starts, it should check the current date against the dates in the control table; if the dates are not current, rebuild the work tables, otherwise just continue processing. If you are going to share tables this way, you probably don't want temp tables at all, just regular permanent tables.
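A rough sketch of that check, using placeholder names (WorkTableControl for the control table, ReportWork for the permanent work table, and the source/filter from the question):

-- Rebuild the work table only when the control row says it is stale
IF NOT EXISTS (SELECT 1
               FROM dbo.WorkTableControl
               WHERE GETDATE() BETWEEN ValidFrom AND ValidTo)
BEGIN
    TRUNCATE TABLE dbo.ReportWork

    INSERT INTO dbo.ReportWork (a, b, c, d, e)
    SELECT a, b, c, d, e
    FROM dbo.SourceTable      -- placeholder source table
    WHERE a = 'bla'

    -- Mark the work table as valid for the rest of today
    UPDATE dbo.WorkTableControl
    SET ValidFrom = CAST(GETDATE() AS date),
        ValidTo   = DATEADD(DAY, 1, CAST(GETDATE() AS date))
END

-- All report branches then read from dbo.ReportWork instead of #blabla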
Another approach is to run a SQL Server job that rebuilds the work table every night at midnight.
By the way, you can create indexes on temporary tables, and you should seriously consider adding a clustered index to the temp table. You may find that queries run much faster against such a table, even if it's a 'small' table.
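For example (using the #blabla table from the question; the choice of key column is just an assumption):

CREATE CLUSTERED INDEX IX_blabla_a ON #blabla (a)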

Related

Update 200 tables in database

I have two databases with a couple hundred tables in them each, in SQL Server. The tables in the two databases are 90% the same, with about 20 different tables in each. I'm working on a stored procedure to update database2 with the data from the tables it shares in database1.
I'm thinking truncate the tables and then insert the records from the tables in the other database, like:
truncate table database2.dbo.table2
select *
into database2.dbo.table2
from database1.dbo.table1
Is this the best way to do this, and is there a better way to do it than writing a couple hundred of these statements?
This will give an error, because the table already exists in the database (your TRUNCATE command implies that it does) and SELECT ... INTO always creates a new table:
select *
into database2.dbo.table2 -- creates a new table
from database1.dbo.table1
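If the target table already exists, the pattern you want instead is TRUNCATE followed by INSERT ... SELECT. A sketch, assuming the two tables have identical column lists:

truncate table database2.dbo.table2

insert into database2.dbo.table2
select *
from database1.dbo.table1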
If you want the same table structure and data, then you should generate scripts for the schema and data and run those scripts on the other database (DB2):
Right-click the database and select Tasks --> Generate Scripts
Next --> select the required table/tables
Next --> click Advanced --> Types of data to script = Schema and data
Also change Check for Existence to True
Next and Finish

Remove Records from Tables Selectively Based on Variable

Scenario:
I need to write an SQL script which will remove all records from some or all tables in a database with around 100 tables.
Some tables are 'data' tables, some are 'lookup' tables. There is nothing in their names to indicate which they are.
Sometimes I will want the script to only remove records from the 'data' tables, on other occasions I will want to use it to remove data from all tables.
The records have to be removed from the tables in a very specific order to prevent foreign key constraint violations.
My original idea was to create a variable at the start of the script - something like @EmptyLookupTables - which I could set to true or false, and then wrap the DELETE statements in an IF... statement so that they were only executed if the value of the variable was true.
However, due to the foreign key constraints I need to include the GO command after just about every DELETE statement, and variables are not persisted across these batches.
How can I write a script which deletes records from my tables in the correct order but skips over certain tables based on the value of a single variable? The database is in Microsoft SQL Server 2016.
The only way I know of doing this without writing a parser for DDL in TSQL is to turn it on its head.
Create a new database with the same schema; populate the lookup tables, but without the records you don't want. Then populate the data tables, but again leave out the records you don't want. Finally, rename or delete the old database, and rename the new database to the original name.
It's still hard, though.
Create a #temp table and store your variable's value in it; the temp table will persist across GO-separated batches. Then just check the temp table inside every batch.
DECLARE @EmptyLookupTables BIT = 1   -- set this once at the top of the script
SELECT @EmptyLookupTables AS EmptyLookupTables INTO #tmp
GO
DECLARE @EmptyLookupTables BIT
SELECT @EmptyLookupTables = EmptyLookupTables FROM #tmp
DELETE FROM YourLookupTable WHERE @EmptyLookupTables = 1
GO
Or you can even join directly to the #tmp table in the DELETE command:
DELETE l FROM YourLookupTable l
INNER JOIN #tmp t ON t.EmptyLookupTables = 1

SQL Server issue dealing with a huge volume of data

I have a requirement like this: I need to delete all the customers who have not done a transaction for the past 800 days.
I have a table Customer where CustomerID is the primary key.
The CreditCard table has columns CustomerID and CreditcardID, where CreditcardID is the primary key.
The Transaction table has columns transactiondatetime, CreditcardID and CreditcardTransactionID, which is the primary key in this table.
All the transaction table data is in a view called CreditcardTransaction, so I am using the view to get the information.
I have written a query to get the credit cards that have done a transaction in the past 800 days, get their CreditcardID and store it in a table.
As the volume of data in the CreditcardTransaction view is around 60 million rows, the query I have written fails: it logs a message that the log file is full and throws a system out-of-memory exception.
INSERT INTO Tempcard
SELECT CreditcardID, transactiondatetime
FROM CreditcardTransaction
WHERE DATEDIFF(DAY, CreditcardTransaction.transactiondatetime, GETDATE()) > 600
I need to get each CreditcardID and when its last transactiondatetime was.
I need to show the data in an Excel sheet, so I am dumping the data into a table and then inserting it into Excel.
What is the best solution to go with here?
I am using an SSIS package (VS 2008 R2) where I call an SP to dump the data into a table, then do a little business logic, and finally insert the data into an Excel sheet.
Thanks
prince
One thought: using a function in a WHERE clause can slow things down - considerably. Consider adding a column named IdleTransactionDays to Tempcard. That lets you move the DATEDIFF call into the SELECT clause; later, you can query the Tempcard table to return the records with IdleTransactionDays greater than 600, similar to this:
INSERT INTO Tempcard
    (CreditcardID, transactiondatetime, IdleTransactionDays)
SELECT CreditcardID,
       transactiondatetime,
       DATEDIFF(DAY, CreditcardTransaction.transactiondatetime, GETDATE())
FROM CreditcardTransaction

SELECT *
FROM Tempcard
WHERE IdleTransactionDays > 600
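On the original query itself, a small variation helps as well: compute the cutoff date once and compare the column to it directly, instead of running DATEDIFF on every row, so that an index on transactiondatetime (if one exists) can be used. A sketch, using the same names as above:

DECLARE @Cutoff datetime
SET @Cutoff = DATEADD(DAY, -600, GETDATE())

INSERT INTO Tempcard (CreditcardID, transactiondatetime)
SELECT CreditcardID, transactiondatetime
FROM CreditcardTransaction
WHERE transactiondatetime < @Cutoff   -- sargable: the column is not wrapped in a function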
Hope this helps,
Andy
Currently you're inserting those records row by row. You could create a SSIS package that reads your data with an OLEDB Source component, performs the necessary operations and bulk inserts them (a minimally logged operation) into your destination table.
You could also directly output your rows into an Excel file. Writing rows to an intermediate table decreases performance.
If your source query still times out, check whether appropriate indexes exist and whether they are heavily fragmented.
You could also partition your source data by year (based on transactiondatetime). This way the data will be loaded in bursts.

SQL Server 2005: stored procedure to move rows from one table to another

I have 2 tables with identical schemas. I need to move rows older than 90 days (based on a datetime column present in the table) from table A to table B. Here is the pseudo code for what I want to do
SET @Criteria = getdate() - 90
Select *
Into table B
From table A
Where column X < @Criteria
--now clean up the records we just moved to table B, in Table A
Delete from table A Where column X < @Criteria
My questions are:
What is the most efficient way to do this (will SELECT ... INTO perform well at high volumes)? Table A will have ~180,000,000 rows in it, and will need to move ~4,000,000 rows at a time to table B.
How do I encapsulate this under one transaction so that I will not delete rows from Table A if there was an error inserting them to Table B. I just want to make sure that I don't accidentally delete a row from table A unless I have successfully written it to table B.
Are there any good SQL Server 2005 books that you recommend?
Thanks,
Chris
I think that SSIS is probably the best solution for your needs.
I think you can just use SSIS tasks like the Data Flow task to achieve your needs. There doesn't seem to be any need to create a procedure separately for the logic.
Transactions can be set for any Data Flow task using TransactionOption property. Check out this article as to how to use Transactions in SSIS
Some basic tutorials on SSIS packages and how to create them can be referred to here and here
regarding
How do I encapsulate this under one transaction so that I will not delete rows from Table A if there was an error inserting them to Table B.
you can delete all rows from A that are in B using a join. Then, if the copy to B failed, nothing will be deleted from A.
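A sketch of the whole move wrapped in a single transaction, using the pseudo names from the question (TableA, TableB, ColumnX) and a hypothetical PrimaryKeyColumn to join on; the join-based delete means rows are only removed from TableA if they actually made it into TableB:

DECLARE @Criteria datetime
SET @Criteria = DATEADD(DAY, -90, GETDATE())

BEGIN TRY
    BEGIN TRANSACTION

    INSERT INTO TableB
    SELECT *
    FROM TableA
    WHERE ColumnX < @Criteria

    -- Delete only the rows that now exist in TableB
    DELETE a
    FROM TableA a
    INNER JOIN TableB b
        ON b.PrimaryKeyColumn = a.PrimaryKeyColumn   -- hypothetical key column
    WHERE a.ColumnX < @Criteria

    COMMIT TRANSACTION
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION
END CATCH

For the volumes mentioned (~4,000,000 rows per run), doing this in smaller batches (for example with TOP and a loop) helps keep the transaction log and locking under control.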

SQL Server Procedure returns multiple tables - Insert results into tables

I have a procedure that returns multiple tables; eg:
CREATE PROCEDURE Something AS
BEGIN
    SELECT 1, 2, 3
    SELECT 4, 5
    SELECT 9, 10, 11
END
I would like to take each table from the result and insert it into a series of tables/temp tables - one for each record set.
Is this possible?
You could create temporary tables within the stored proc and push the records into them, but note that a local #temp table created inside a procedure is dropped automatically when the procedure finishes; you would need global ##temp tables for them to remain visible in the session afterwards.
Or you could create the temp tables beforehand and call the sp to populate them, as sketched below.
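A minimal sketch of that second option, matching the three result sets in the question (table and column names are placeholders):

-- Create the destination temp tables in the calling session
CREATE TABLE #set1 (col1 int, col2 int, col3 int)
CREATE TABLE #set2 (col1 int, col2 int)
CREATE TABLE #set3 (col1 int, col2 int, col3 int)
GO

-- The procedure inserts into the caller's temp tables instead of just selecting;
-- temp tables created by the calling session are visible inside the procedure.
ALTER PROCEDURE Something AS
BEGIN
    INSERT INTO #set1 SELECT 1, 2, 3
    INSERT INTO #set2 SELECT 4, 5
    INSERT INTO #set3 SELECT 9, 10, 11
END
GO

EXEC Something
SELECT * FROM #set1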
If you UNION the results together, they would come out as one result set. Your second query only has 2 columns, but that would need to be resolved either way as you put the data into a table.
Check out Multiple Active Result Sets (MARS). It may do what you are looking for.
http://www.sqlteam.com/article/multiple-active-result-sets-mars
http://blogs.msdn.com/sqlprogrammability/archive/2006/05/01/MARSIntroduction1.aspx