Stored procedure for writing back to a DB2 database - SQL

I'm new to DB2 stored procedures and am looking for some help/guidance.
I have users who access Cognos for reporting.
Recently I got a requirement from one of our clients to write back to a DB2 table based on a comment or input the user provides through IBM Cognos.
I tried the code below in DB2 and Cognos, but it only half works.
The catch is that whenever a user provides a fresh entry it gets stored quickly, but whenever a user tries to update the same entry, it takes almost 15-20 minutes for that record to refresh at the table level. I don't understand what I can improve in my code here.
create procedure ngetl.new_update_comment (
    in @p_job_status_summary_key integer
    ,in @p_comment varchar(4000)
    ,in @p_modified_by varchar(25)
)
dynamic result sets 1
begin
    declare e1 cursor with return for
        select 1
        from ngetl.job_status_summary
        where job_status_summary_key = 17076
        with ur;

    if upper(@p_modified_by) like '%IBM%'
        or upper(@p_modified_by) like 'V%' then
        update ngetl.job_status_summary
        set ibm_comment = @p_comment
            ,modified_by_ibm = @p_modified_by
            ,timestamp_ibm = current_timestamp
        where job_status_summary_key = @p_job_status_summary_key;
    else
        update ngetl.job_status_summary
        set sbi_comment = @p_comment
            ,modified_by_sbi = @p_modified_by
        where job_status_summary_key = @p_job_status_summary_key;
    end if;

    commit;
    open e1;
end

Related

SSIS is hanging during Update with 3 million rows

I'm implementing a new method for a warehouse. The new method consists of performing incremental loading between source and destination tables (insert, update or delete).
All the tables are working really well, except for one table whose source has more than 3 million rows; as you can see in the image below, it just starts running but never finishes.
Probably I'm not doing the update in the correct way, or there is another way to do it.
Here are some pictures of my SSIS package:
The highlighted object is where it hangs.
This is the stored procedure I call to update the table:
ALTER PROCEDURE [dbo].[UpdateDim_A]
    @ID INT
    ,@FileDataID INT
    ,@CategoryID SMALLINT
    ,@FirstName VARCHAR(50)
    ,@LastName VARCHAR(50)
    ,@Company VARCHAR(100)
    ,@Email VARCHAR(250)
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRAN
    UPDATE DIM_A
    SET
        [FileDataID] = @FileDataID,
        [CategoryID] = @CategoryID,
        [FirstName] = @FirstName,
        [LastName] = @LastName,
        [Company] = @Company,
        [Email] = @Email
    WHERE PartyID = @ID
    COMMIT TRAN;
END
Note:
I already tried dropping the constraints and indexes and changing the recovery mode of the database to simple.
Any help will be appreciated.
After applying the solution provided by @Prabhat G, this is how my package looks, running in 39 seconds (avg)!!!
Inside Dim_A DataFlow
Follow these 2 performance enhancers and you'll avoid your bottleneck.
Remove the Sort transformation. In your source, use an ORDER BY in the SQL while fetching the data. The reason is that Sort pulls all the records into memory before sorting. You don't want that, be it an incremental or a full load.
In the last step of the update, introduce another staging table instead of the update-records OLE DB Command; it should be a replica of the Dim table. Once all the matching records are inserted into this new staging table, exit the Data Flow Task and create an Execute SQL Task which simply UPDATEs the Dim table by joining on the ID/conditions.
The reason for this is that the OLE DB Command hits row by row. Always prefer updating via an Execute SQL Task, as it's a batch process.
Edit:
As per the comments, to update only the changed rows in the Execute SQL Task, add the conditions in the WHERE clause:
e.g.:
UPDATE x
SET
x.attribute_A = y.attribute_A
,x.attribute_B = y.attribute_B
FROM
DimA x
inner join stg_DimA y
ON x.Id = y.Id
WHERE
(x.Attribute_A <> y.Attribute_A
OR x.Attribute_B <> y.Attribute_B)
Your problem is actually very simple: the method you are using executes that stored procedure for every row returned. If you have 9961 rows to update (as in your picture), it will run that statement 9961 separate times. Chances are that if you look at the active queries running on SQL Server, you'll see that procedure executing over and over.
What you should do to speed this up is dump that data into a staging table, then use an Execute SQL Task further along in your package to run a standard set-based SQL update, as sketched below. This will run much faster.
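A minimal sketch of that pattern, reusing the column list from the question (the staging table name is made up; the data flow's OLE DB Destination loads it, and the UPDATE runs once from an Execute SQL Task):
-- Hypothetical staging table mirroring the updatable columns of DIM_A.
CREATE TABLE dbo.stg_DimA_Updates (
    PartyID    INT NOT NULL,
    FileDataID INT,
    CategoryID SMALLINT,
    FirstName  VARCHAR(50),
    LastName   VARCHAR(50),
    Company    VARCHAR(100),
    Email      VARCHAR(250)
);
-- Run once from an Execute SQL Task after the data flow finishes.
UPDATE d
SET    d.FileDataID = s.FileDataID,
       d.CategoryID = s.CategoryID,
       d.FirstName  = s.FirstName,
       d.LastName   = s.LastName,
       d.Company    = s.Company,
       d.Email      = s.Email
FROM   dbo.DIM_A d
JOIN   dbo.stg_DimA_Updates s
       ON s.PartyID = d.PartyID;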
The problem is that you are trying to execute a stored procedure within the data flow. The correct SqlCommand is an explicit UPDATE query; you then map the columns from SSIS to the columns of the table that you are updating.
UPDATE DIM_A
SET FileDataID = ?
,CategoryID = ?
,FirstName = ?
,LastName = ?
,Company = ?
,Email = ?
WHERE PartyID = ?
Note: The @ID needs to be included as a column in your data flow.
One final thing you should consider, as Zane correctly pointed out: you should only update rows that have changed. So, in your data flow you should add a Conditional Split transformation that checks whether any of the columns in the new source row are different from the existing table rows. Only rows that are different should be sent to the OLE DB Command; the rest can be disregarded.

TSQL query to end user

I have written a parameterized Transact-SQL query for a member of our finance department, and several times during the month I run it and copy the raw output with headers into Excel for him. Now that department is being regionalised and I've got several finance departments all wanting the same thing.
I know that SSRS will be deployed eventually but our infrastructure team are building a new environment and don't want any new installations in the 'old world' for the moment.
I just need a way to give select individuals access to run that parameterized query against the database. I had thought about turning the query into a view and creating logins for their network accounts with access only to that view but I don't think you can use parameters with views. I wondered if there is a simple interface that can allow them to enter parameters against a stored query or view without using SSRS. It seems so simple but I'm not having much luck finding out.
Sorry if this is a stupid question but I've just moved from server admin to a DBA role and I've only just scratched the surface!
Create a view and call it in a stored procedure with a parameter.
A sample would be:
CREATE VIEW [dbo].[vw_sampleView]
AS
SELECT * FROM tblSample
CREATE PROC [dbo].[proc_GetData]
    @id int
AS
BEGIN
    SELECT * FROM vw_sampleView WHERE id = @id
END
This SP then returns the filtered data. Grant permission to execute this SP to the relevant users:
GRANT EXECUTE ON [dbo].[proc_GetData] TO [user_logins]
You can create a UDF which will return a table based on the parameters. For example:
CREATE FUNCTION [dbo].[fnt_myfunction](@id INT)
RETURNS TABLE
AS
RETURN (
    SELECT *
    FROM myTable
    WHERE id = @id
);

DECLARE @id INT = 1;
SELECT * FROM [dbo].[fnt_myfunction](@id);
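If you go this route, the users will also need SELECT permission on the function, granted much like the EXECUTE permission above (the login name is just a placeholder):
GRANT SELECT ON [dbo].[fnt_myfunction] TO [user_logins];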
Hope that helps.

Stored Procedures and Triggers in a database

What do stored procedures and triggers in a database mean?
How can I create stored procedures?
How can I create triggers?
If you have simple examples for each of these, please help :)
All I know is that a trigger is activated if an action (insert, delete or update) violates the constraints specified, but I don't know how to create one, so again, if anyone has an example, please share.
Think of a Stored Procedure as a method in your code. It runs a specific set of instructions.
Stored procedures are created to, for example, manage complex sets of data that would otherwise be a pain to handle in your application code.
You can create a Stored Procedure with the following instructions:
Oracle
CREATE OR REPLACE PROCEDURE P_PROCEDURE_NAME (
    pParameter1 NUMBER
    , pParameter2 VARCHAR2
) AS
BEGIN
    NULL; -- Procedure code here...
END;
SQL Server
CREATE PROCEDURE cspProcedureName
    @parameter1 int
    , @parameter2 nvarchar(100)
AS
    -- Procedure code here...
Oracle
As for triggers, they are sets of code invoked when an action occurs on the related table. For instance, in Oracle there are no IDENTITY columns such as SQL Server offers. Instead, sequences are used along with triggers to simulate the same. Hence, you will need to create an Oracle SEQUENCE, then the TRIGGER to update the ID field of your table.
CREATE SEQUENCE SEQ_CUSTOMERS
MINVALUE 1
MAXVALUE 65535
START WITH 1
INCREMENT BY 1;
CREATE OR REPLACE TRIGGER TRG_CUSTOMERS_INSERT
BEFORE INSERT
ON TBL_CUSTOMERS
FOR EACH ROW
BEGIN
:NEW.CUST_ID := SEQ_CUSTOMERS.NEXTVAL;
END;
SQL Server
A trigger example in SQL Server would be automatically updating the last-update datetime of a record. Consider the following:
CREATE TABLE Customers (
CustId int NOT NULL IDENTITY(1, 1) PRIMARY KEY
, CustName nvarchar(100) NOT NULL
, CreatedOn datetime DEFAULT GETDATE()
, LastUpdate datetime NOT NULL
)
GO
CREATE TRIGGER trgCustomersUpdt
ON Customers
AFTER UPDATE
AS
    UPDATE c
    SET c.LastUpdate = GETDATE()
    FROM Customers c
    INNER JOIN inserted i ON c.CustId = i.CustId
GO
DISCLAIMER
This code has not been tested and may require minor changes for it to work properly against its respective RDBMS.
To sum it up, triggers are mainly used as illustrated here, although there are many other possible uses, such as building up a history of table changes that occurred over time, keeping records of all transactions in a history table, or the like. Stored procedures are mainly used to perform complex database tasks that would be too complex to do in application code.
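As a quick illustration of the history use case (table and column names made up), an AFTER UPDATE trigger can copy the old values into an audit table:
-- Hypothetical audit table and trigger; adjust names/columns to your schema.
CREATE TABLE CustomersHistory (
    CustId    int           NOT NULL,
    CustName  nvarchar(100) NOT NULL,
    ChangedOn datetime      NOT NULL DEFAULT GETDATE()
)
GO
CREATE TRIGGER trgCustomersHistory
ON Customers
AFTER UPDATE
AS
    -- "deleted" holds the pre-update row values.
    INSERT INTO CustomersHistory (CustId, CustName)
    SELECT d.CustId, d.CustName
    FROM deleted d
GO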

SELECT FROM stored procedure?

If I have a stored proc in SQL Server 2008, I know I can run it from management studio like so:
exec rpt_myproc @include_all = 1, @start_date = '1/1/2010'
But I'm using an ad-hoc query tool that wasn't returning any results. So I asked it to give me the SQL it was running and it returns this:
SELECT DISTINCT TOP 100000
[dbo].[rpt_myproc].[company_name] AS 'company name',
[dbo].[rpt_myproc].[order_number] AS 'order number'
FROM [dbo].[rpt_myproc]
WHERE
([dbo].[rpt_myproc].[PARAM_start_date] IN ('1/1/2010'))
AND ([dbo].[rpt_myproc].[PARAM_include_all] IN ('1'))
I'm not familiar with that syntax. Is that even possible? The ad-hoc tool isn't failing, but it may be swallowing that error. Then again, maybe it's just giving me a shorthand which it will translate to the proper syntax later. But if so, why would it give it to me in this form?
I can't seem to get that SQL to execute in Management Studio, so I was wondering if something like that were possible?
I understand that this is more than 3 years old, but in case anybody else is looking for an answer to this question: I had to deal with this reporting platform, Izenda, and have found that stored procedures are treated differently than the output from the "sql" icon. Here is what happens when you select an SP as the data source:
A dynamic SQL statement is built.
It creates two temporary tables with all of the columns that your SP returns.
The first temp table is populated with the result of your stored procedure.
The second temp table is populated with the result plus the value of your input parameter.
A statement is created that queries these two temporary tables.
Please note that if you don't feed it a parameter, it will execute with a default value of empty string '', which will most likely return no data.
In my opinion, this is a horrible way to handle stored procs, which is a good reason why we are planning to drop them for some other reporting solution.
You can insert the first result set of a stored procedure into a temporary table:
SELECT *
INTO #YourProc
FROM OPENROWSET('SQLNCLI',
'server=SERVERNAME\INSTANCENAME;trusted_connection=yes',
'set fmtonly off; exec rpt_myproc')
There's like 3 ways to do this, see this blog post. If you know the output beforehand, you can do it without the remote query.
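For example, if you know the procedure's result set ahead of time, a local temp table plus INSERT ... EXEC avoids the OPENROWSET remote query entirely (the column names and types below are assumptions; they must match what rpt_myproc actually returns):
-- Columns are assumed for illustration; they must match the proc's result set exactly.
CREATE TABLE #rpt_myproc_output (
    company_name varchar(100),
    order_number int
);

INSERT INTO #rpt_myproc_output
EXEC rpt_myproc @include_all = 1, @start_date = '1/1/2010';

SELECT * FROM #rpt_myproc_output;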
What tool are you using? You should be able to specify the query type (i.e. SQL, or stored proc, etc)
Haven't used that tool before but a quick google came up with this example (not sure if it will help you)
Using a stored procedure in 5.x
This example uses a stored procedure to populate a table before report design or execution. As shown in the comments, the table StoredProcResults must already exist. Every time a report is created or viewed this stored procedure will update the results of the StoredProcResults table. For 6.x follow these instructions but treat the SP as a regular datasource.
// Customize a report on the fly prior to execution on a per user basis
public override void PreExecuteReportSet(Izenda.AdHoc.ReportSet reportSet){
/* This sample uses the AdventureWorks database. Here is the definition of the table and stored procedure created for this report.
CREATE TABLE [dbo].[StoredProcResults](
[ProductID] [int] NOT NULL,
[OrderQuantity] [int] NOT NULL,
[Total] [int] NOT NULL,
[DueDate] [smalldatetime] NOT NULL
) ON [PRIMARY]
CREATE PROCEDURE DoCustomAction (
    @date1 as smalldatetime,
    @date2 as smalldatetime
) AS
BEGIN
    insert into StoredProcResults
    select ProductID, OrderQty, LineTotal, ModifiedDate
    from Sales.SalesOrderDetail
    where ModifiedDate >= @date1 and ModifiedDate <= @date2
END
*/
string currentReportName = HttpContext.Current.Request.QueryString["rn"];
if (currentReportName == "StoredProcExample") {
SqlConnection myConnection = new SqlConnection(Izenda.AdHoc.AdHocSettings.SqlServerConnectionString);
SqlCommand myCommand = new SqlCommand("DoCustomAction", myConnection);
// Mark the Command as a SPROC
myCommand.CommandType = System.Data.CommandType.StoredProcedure;
// Add Parameters to SPROC
SqlParameter parameterdate1 = new SqlParameter("@date1", System.Data.SqlDbType.SmallDateTime);
parameterdate1.Value = "1/1/2003";
myCommand.Parameters.Add(parameterdate1);
SqlParameter parameterdate2 = new SqlParameter("@date2", System.Data.SqlDbType.SmallDateTime);
parameterdate2.Value = "12/31/2003";
myCommand.Parameters.Add(parameterdate2);
try{
myConnection.Open();
myCommand.ExecuteNonQuery();
}
finally{
myConnection.Close();
}
}
}
Are you sure it is a sproc? I've never heard of or seen a direct select from a sproc.
What I have seen that works and functions exactly as your code seems to be working is a table-valued function: a function that can take parameters and return a "SELECT FROMable" table just like this (in essence giving you a 'parameterized' view), as sketched below.
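A minimal sketch of such a function, with made-up object names and columns loosely based on the query above:
-- Hypothetical inline table-valued function; names and columns are illustrative only.
CREATE FUNCTION dbo.fn_myproc (@include_all bit, @start_date datetime)
RETURNS TABLE
AS
RETURN (
    SELECT company_name, order_number
    FROM dbo.orders                 -- hypothetical source table
    WHERE @include_all = 1 OR order_date >= @start_date
);
GO

-- Can be queried like a table:
SELECT company_name, order_number
FROM dbo.fn_myproc(1, '1/1/2010');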

Determine caller within stored proc or trigger

I am working with an insert trigger within a Sybase database. I know I can access the @@nestlevel to determine whether I am being called directly or as a result of another trigger or procedure.
Is there any way to determine, when the nesting level is deeper than 1, who performed the action causing the trigger to fire?
For example, was the table inserted to directly, was it inserted into by another trigger and if so, which one.
As far as I know, this is not possible. Your best bet is to include it as a parameter to your stored procedure(s). As explained here, this will also make your code more portable since any method used would likely rely on some database-specific call. The link there was specific for SQL Server 2005, not Sybase, but I think you're pretty much in the same boat.
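A minimal sketch of that approach, with made-up procedure names (in both Sybase ASE and SQL Server, object_name(@@procid) inside a procedure returns that procedure's own name, so each caller can pass itself along):
-- Hypothetical procedure that wants to know who called it.
create procedure update_balance
    @account_id int,
    @caller     varchar(255) = null   -- null when called directly
as
begin
    -- ... do the real work here; log or branch on @caller as needed ...
    return 0
end
go

-- Hypothetical calling procedure passes its own name explicitly.
create procedure nightly_batch
as
begin
    declare @me varchar(255)
    select @me = object_name(@@procid)
    exec update_balance @account_id = 42, @caller = @me
end
go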
I've not tested this myself, but assuming you are using Sybase ASE 15.0.3 or later, have the monitoring tables monProcessStatement and monSysStatement enabled, and have appropriate permissions set to allow them to be accessed from your trigger, you could try...
declare @parent_proc_id int

if @@nestlevel > 1
begin
    create table #temp_parent_proc (
        procId    int,
        nestLevel int,
        contextId int
    )

    insert into #temp_parent_proc
    select mss.ProcedureID,
           mss.ProcNestLevel,
           mss.ContextID
    from monSysStatement mss
    join monProcessStatement mps
         on  mss.KPID = mps.KPID
         and mss.BatchID = mps.BatchID
         and mss.SPID = mps.SPID
    where mps.ProcedureID = @@procid
      and mps.SPID = @@spid

    select @parent_proc_id = (select tpp.procId
                              from #temp_parent_proc tpp,
                                   #temp_parent_proc tpp2
                              where tpp.nestLevel = tpp2.nestLevel - 1
                                and tpp.contextId < tpp2.contextId
                                and tpp2.procId = @@procid
                                and tpp2.nestLevel = @@nestlevel
                              group by tpp.procId, tpp.contextId
                              having tpp.contextId = max(tpp.contextId))

    drop table #temp_parent_proc
end
The temp table is required because of the nature of monProcessStatement and monSysStatement.
monProcessStatement is transient and so if you reference it more than once, it may no longer hold the same rows.
monSysStatement is a historic table and is guaranteed to return an individual row only once to any process accessing it.
If you do not have, or do not want to set, permissions to access the monitoring tables, you could put this into a stored procedure that you pass @@procid, @@spid, and @@nestlevel to as parameters.
If this also isn't an option, since you cannot pass parameters into triggers, another possible workaround would be to use a temporary table.
In each proc that might fire this trigger...
create table #trigger_parent (proc_id int)
insert into #trigger_parent select @@procid
Then in your trigger the temp table will be available...
if object_id('#trigger_parent') is not null
    select @parent_proc = proc_id from #trigger_parent
and you will know it was triggered from within another proc.
The trouble with this is that it doesn't 'just work'; you have to enforce the temp table setup.
You could do further checking to find cases where there is no #trigger_parent but the nesting level is > 1, and combine that with a similar query against the monitoring tables as above to find potential candidates that would need to be updated.