When I call my stored procedure the first time, it executes in a few seconds. When I call it again, the exact same way, a minute or two later, it takes around 3-5 minutes! The stored procedure just does a run-of-the-mill UPDATE with some WHERE clauses, nothing extraordinary.
I have seen other questions here where stored procedures ran slow the first time and faster on subsequent runs, so my situation really puzzles me!
I ran the application in Hibernate debug mode, and this is where it seems to take a long time the second time around. You can see there's a 3-minute gap between the first and the second debug statement:
........ //binding of other named parameters
17 Sep 2013 17:06:38,287 | DEBUG | org.hibernate.engine.query.NativeSQLQueryPlan [139] | bindNamedParameters() ABC111 -> productName [1]
17 Sep 2013 17:09:39,051 | DEBUG | org.hibernate.jdbc.AbstractBatcher [418] | about to close PreparedStatement (open PreparedStatements: 1, globally: 1)
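For reference, the exact size of that gap can be computed from the two log timestamps; a quick sketch using java.time (the class name is just for illustration):

```java
import java.time.Duration;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.Locale;

public class LogGap {
    // Pattern matching the log's "17 Sep 2013 17:06:38,287" timestamp format.
    static final DateTimeFormatter FMT =
            DateTimeFormatter.ofPattern("dd MMM yyyy HH:mm:ss,SSS", Locale.ENGLISH);

    // Elapsed time between two log-line timestamps.
    public static Duration gap(String first, String second) {
        return Duration.between(LocalDateTime.parse(first, FMT),
                                LocalDateTime.parse(second, FMT));
    }

    public static void main(String[] args) {
        Duration d = gap("17 Sep 2013 17:06:38,287", "17 Sep 2013 17:09:39,051");
        System.out.println(d.toMillis() + " ms"); // prints "180764 ms"
    }
}
```

So the statement sat between parameter binding and close for just over three minutes.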
Here's how I call my stored procedure:
StringBuilder query = new StringBuilder();
query.append("{call ").append(Constants.StoreProcedures.UPDATE_ORDER_STATUS)
     .append("(:productName, :prodNumber, ")
     // other parameters
     .append(")} ");
final String queryStr = query.toString();

Object obj = super.getJpaTemplate().execute(new JpaCallback() {
    public Object doInJpa(EntityManager em) throws PersistenceException {
        Query query = em.createNativeQuery(queryStr);
        query.setParameter("productName", prodBean.getProductName());
        query.setParameter("prodNumber", prodBean.getProdNumber());
        // other parameters
        return query.executeUpdate();
    }
});
EDIT:
On sniffing around further, I noticed that between the two calls to the stored proc in question (let's call it SP1), another stored proc (SP2) is called in a loop. I face this problem only when the number of loop iterations is high: over 400. And if I run SP1 after, say, 5 minutes or so, it seems to run fine and doesn't take so long. (I haven't figured out the actual threshold time gap after which the problem doesn't occur, though.)
I'm a newbie at creating stored procedures on DB2 for IBM i (AS400). I'm trying to work out what's wrong with the way I call my stored procedure from STRSQL.
Stored procedures with only 'IN' parameters are callable, but stored procedures with 'OUT' parameters are not.
create procedure egg(out pcount# INT)
language sql
set option dbgview=*source, USRPRF=*USER
begin
set pcount# = 5;
end
I call it like this:
call egg(?)
Then this error shows up.
SQL0418
Message . . . . : Use of parameter marker not valid.
I want to see the pcount# result, '5', in the output.
Any help would be appreciated.
What you are trying to do will work, but only if you use iNav's Run SQL Scripts query tool.
[ Thu Mar 26 08:50:52 EDT 2015 ] Run Selected
> call egg(?)
Return Code = 0
Output Parameter #1 = 5
Statement ran successfully (0 ms)
Another option, if you're on a recent (7.1+) release, is to use global variables:
create or replace variable myout int default(0)
call egg(myout)
select myout from sysibm.sysdummy1
Note that even when using a global variable, iNav's Run SQL Scripts is the better choice, as it has a tab you can open to create, update, and delete global variables directly.
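For completeness: the `call egg(?)` syntax that STRSQL rejects with SQL0418 does work from client APIs that can register the marker as an OUT parameter, for example JDBC. A minimal sketch (connection setup is omitted, and `callEgg` is an illustrative name, not from the original post):

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Types;

public class EggCaller {
    // The same CALL text that STRSQL rejects: a client API can bind the
    // parameter marker as an OUT parameter, which STRSQL cannot.
    static final String CALL_SQL = "{call egg(?)}";

    // Returns the value of the OUT parameter pcount# (5, per the procedure body).
    public static int callEgg(Connection conn) throws SQLException {
        try (CallableStatement cs = conn.prepareCall(CALL_SQL)) {
            cs.registerOutParameter(1, Types.INTEGER);
            cs.execute();
            return cs.getInt(1);
        }
    }
}
```

So the limitation is in STRSQL itself, not in the procedure definition.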
I have an Entity Framework select statement which returns some 50K rows. I am getting a timeout exception for this simple select command:
var db = new DBEntity();
db.CommandTimeout = 350;
var actCosts = (from i in db.Products
                where i.productID == productID
                select i).ToList();
The database is in Azure. I connected through SSMS to find out how long retrieving the rows actually takes: about 4:30 minutes to bring back all the data. So I set CommandTimeout to 350 seconds, but it didn't work.
Also, is there any performance difference between the above and this one?
var actCosts = db.Products.Where(t => t.productID == productID).ToList();
First, try running a .FirstOrDefault() and see if it returns data in time:
var actCosts = (from i in db.Products
                where i.productID == productID
                select i).FirstOrDefault();
If it works, I suggest setting an even larger timeout, like 1000 seconds, and seeing if your results are returned then.
I believe SSMS retrieves the data differently and is probably better optimized for this than a simple .ToList() call.
We have the following code:
var db = new CoreEntityDB();
var abc = new abcDB();
var connection = new DataStore(db.ConnectionStrings.First(p => p.Name == "Abc").Value, DataStore.Server.SqlServer);
var projects = new List<abc_Employees>();
projects.AddRange(abc.Database.SqlQuery<abc_Employees>("usp_ABC_EmployeeSys"));
The project is failing on the following line:
projects.AddRange(abc.Database.SqlQuery<abc_Employees>("usp_ABC_EmployeeSys"));
And the error says: "Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding"
Everything was working fine a few days ago, and now, nothing. Nothing's changed either, as far as the code or the SQL stored proc goes.
Anyone else experienced this before?
Did you try running the SP independently to see if that's the bottleneck?
Is it the command that is timing out?
You can increase the command timeout using:
((IObjectContextAdapter)abc).ObjectContext.CommandTimeout = 180;
You should take a look at your stored procedure. The default timeout is 30 seconds so it looks like it is taking longer for the stored procedure to return results. Increasing the timeout is just treating the symptoms.
I want to know if I can time out a SQL query.
Suppose I have a SQL query whose output is useful only if it arrives within 10 minutes; after that, even if it produces results, they are of no use to me.
What I want is for the query to kill itself if it takes more than 10 minutes of processing.
Is there a way to do that? An example would be pretty helpful.
Let me know if my thoughts are not comprehensible.
Here's what it would look like with SqlCommand.CommandTimeout:
SqlCommand cmd = new SqlCommand();
cmd.CommandText = "SELECT blah FROM Categories ORDER BY CategoryID";
cmd.CommandTimeout = 600; // 10 minutes = 600 seconds
// Caveat: if you need a timeout value of more than 30-60 seconds,
// it's perhaps time to look at why it takes so long...
You can set the CommandTimeout property of the Command object to 10 minutes. When the command times out, SQL Server will notice that the connection is dropped and cancel the query.
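CommandTimeout enforces the deadline on the driver side. The same "useful only if it finishes in time" rule can also be enforced in application code by running the query on a worker and abandoning it after the deadline. A sketch of the pattern (shown here in Java; the names are illustrative, and a real database call would additionally need a driver-level cancel, e.g. Statement.cancel(), to stop server-side work):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class BoundedQuery {
    // Runs a task but gives up after the given timeout, cancelling the worker.
    // Cancellation here only interrupts the thread; stopping server-side work
    // still requires the driver's own cancel/timeout mechanism.
    public static <T> T runWithTimeout(Callable<T> task, long timeout, TimeUnit unit)
            throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<T> future = pool.submit(task);
        try {
            return future.get(timeout, unit);
        } catch (TimeoutException e) {
            future.cancel(true); // result is no longer useful; abandon it
            throw e;
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for a long-running query: sleeps longer than the budget.
        try {
            runWithTimeout(() -> { Thread.sleep(5_000); return "rows"; },
                           200, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            System.out.println("query abandoned after 200 ms");
        }
    }
}
```

With a 10-minute budget you would pass `10, TimeUnit.MINUTES` instead.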
I am writing a stored procedure that, when completed, will be used to scan staging tables for bogus data on a column-by-column basis.
Step one in the exercise was just to scan the table, which is what the code below does. The issue is that this code runs in 5:45 (minutes:seconds); however, the same code run as a console app (changing the connection string, of course) runs in about 44 seconds.
using (SqlConnection sqlConnection = new SqlConnection("context connection=true"))
{
    sqlConnection.Open();
    string sqlText = string.Format("select * from {0}", source_table.Value);
    int count = 0;
    using (SqlCommand sqlCommand = new SqlCommand(sqlText, sqlConnection))
    using (SqlDataReader reader = sqlCommand.ExecuteReader())
    {
        while (reader.Read())
            count++;
        SqlDataRecord record = new SqlDataRecord(new SqlMetaData("rowcount", SqlDbType.Int));
        SqlContext.Pipe.SendResultsStart(record);
        record.SetInt32(0, count);
        SqlContext.Pipe.SendResultsRow(record);
        SqlContext.Pipe.SendResultsEnd();
    }
}
Again, as a console app (with a different connection string, of course) the same code runs in about 44 seconds, which is closer to what I was expecting.
What am I missing on the SP side that would cause it to run so slowly?
Please note: I fully understand that if I wanted a count of rows, I should use the count(*) aggregation --- that's not the purpose of this exercise.
The type of code you are writing is highly susceptible to SQL injection. Rather than processing the reader the way you are, you could just use the RecordsAffected property to find the number of rows in the reader.
EDIT:
After doing some research: the difference you are seeing is a by-design difference between the context connection and a regular connection. Peter Debetta blogged about this and writes:
"The context connection is written such that it only fetches a row at a time, so for each of the 20 million some odd rows, the code was asking for each row individually. Using a non-context connection, however, it requests 8K worth of rows at a time."
http://sqlblog.com/blogs/peter_debetta/archive/2006/07/21/context-connection-is-slow.aspx
Well, it would seem the answer is in the connection string after all.
context connection=true
versus
server=(local); database=foo; integrated security=true
For some bizarre reason, using the "external" connection, the SP runs almost as fast as a console app (still not as fast, mind you: 55 seconds).
Of course, now the assembly has to be deployed as EXTERNAL_ACCESS rather than SAFE, and that introduces more frustration.