SQL Parameter not found

The error:
Procedure or Function '' expects parameter '@Param1', which was not supplied.
I have a stored procedure on SQL Server 2012. An excerpt of the stored procedure:
SELECT *
FROM Orders
WHERE Orders.CustomerID = @Param1 AND
      Orders.CustomerJoinDate = @Param2
I call it from my code (using Visual Studio 2008) like so...
The calling method in Visual Studio:
First I create an array of parameters I'm going to pass...
Dim Param() As SqlClient.SqlParameter = New SqlClient.SqlParameter() { _
    New SqlClient.SqlParameter("@Param1", Me.cmbFilter1.Text), _
    New SqlClient.SqlParameter("@Param2", Me.cmbFilter2.Text)}
Then I loop through the parameters and add them to a command to be executed by a datareader.
The mSQLCmd is set to call the stored procedure described above...
mSQLCmd.Parameters.Clear()
mSQLCmd.CommandText = SQLCmd
For Each sParam As SqlClient.SqlParameter In Param
mSQLCmd.Parameters.AddWithValue(sParam.ParameterName, sParam.Value)
Next
Error occurs when:
I try to run mSQLCmd.ExecuteReader(). Can someone point me in the right direction on this? I've verified that each parameter is included in Param() with the correct values.
I've also tested the stored procedure on SQL Server and verified that it works correctly when the two necessary parameters are provided. Something is wrong on the VB side.

If you're calling a stored procedure, you need to set the CommandType of the SqlCommand accordingly!
mSQLCmd.CommandText = SQLCmd
' add this line!
mSQLCmd.CommandType = CommandType.StoredProcedure
Otherwise, the name of the stored procedure you're trying to call is interpreted as an ad-hoc SQL statement to be executed as-is.
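For reference, the same pattern in C# (a minimal sketch; the helper name RunProc and the procedure name dbo.GetOrders are made up for illustration, not taken from the question):
using System.Data;
using System.Data.SqlClient;

// Minimal sketch: the key point is setting CommandType.StoredProcedure
// before executing, otherwise the CommandText is parsed as an ad-hoc batch.
static SqlDataReader RunProc(SqlConnection conn, string procName,
                             params SqlParameter[] parameters)
{
    var cmd = conn.CreateCommand();
    cmd.CommandText = procName;                    // e.g. "dbo.GetOrders" (hypothetical)
    cmd.CommandType = CommandType.StoredProcedure; // the missing line
    cmd.Parameters.AddRange(parameters);
    return cmd.ExecuteReader();
}

// Usage (values are illustrative):
// var reader = RunProc(conn, "dbo.GetOrders",
//     New SqlParameter("@Param1", customerId),
//     New SqlParameter("@Param2", joinDate));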

Related

SQL Server connection context using temporary table cannot be used in stored procedures called with SqlDataAdapter.Fill

I want to have some information available for any stored procedure, such as current user. Following the temporary table method indicated here, I have tried the following:
1) create temporary table when connection is opened
private void setConnectionContextInfo(SqlConnection connection)
{
if (!AllowInsertConnectionContextInfo)
return;
var username = HttpContext.Current?.User?.Identity?.Name ?? "";
var commandBuilder = new StringBuilder($@"
CREATE TABLE #ConnectionContextInfo(
AttributeName VARCHAR(64) PRIMARY KEY,
AttributeValue VARCHAR(1024)
);
INSERT INTO #ConnectionContextInfo VALUES('Username', @Username);
");
using (var command = connection.CreateCommand())
{
command.Parameters.AddWithValue("Username", username);
command.ExecuteNonQuery();
}
}
/// <summary>
/// checks if current connection exists / is closed and creates / opens it if necessary
/// also takes care of the special authentication required by V3 by building a windows impersonation context
/// </summary>
public override void EnsureConnection()
{
try
{
lock (connectionLock)
{
if (Connection == null)
{
Connection = new SqlConnection(ConnectionString);
Connection.Open();
setConnectionContextInfo(Connection);
}
if (Connection.State == ConnectionState.Closed)
{
Connection.Open();
setConnectionContextInfo(Connection);
}
}
}
catch (Exception ex)
{
if (Connection != null && Connection.State != ConnectionState.Open)
Connection.Close();
throw new ApplicationException("Could not open SQL Server Connection.", ex);
}
}
2) Tested with a procedure which is used to populate a DataTable using SqlDataAdapter.Fill, by using the following function:
public DataTable GetDataTable(String proc, Dictionary<String, object> parameters, CommandType commandType)
{
EnsureConnection();
using (var command = Connection.CreateCommand())
{
if (Transaction != null)
command.Transaction = Transaction;
SqlDataAdapter adapter = new SqlDataAdapter(proc, Connection);
adapter.SelectCommand.CommandTimeout = CommonConstants.DataAccess.DefaultCommandTimeout;
adapter.SelectCommand.CommandType = commandType;
if (Transaction != null)
adapter.SelectCommand.Transaction = Transaction;
ConstructCommandParameters(adapter.SelectCommand, parameters);
DataTable dt = new DataTable();
try
{
adapter.Fill(dt);
return dt;
}
catch (SqlException ex)
{
var err = String.Format("Error executing stored procedure '{0}' - {1}.", proc, ex.Message);
throw new TptDataAccessException(err, ex);
}
}
}
3) called procedure tries to get the username like this:
DECLARE @username VARCHAR(128) = (select AttributeValue FROM #ConnectionContextInfo where AttributeName = 'Username')
but #ConnectionContextInfo is no longer available in the context.
I have put a SQL profiler against the database, to check what is happening:
temporary table is created successfully using a certain SPID
procedure is called using the same SPID
Why is temporary table not available within the procedure scope?
In T-SQL doing the following works:
create a temporary table
call a procedure that needs data from that particular temporary table
temporary table is dropped only explicitly or after current scope ends
Thanks.
As was shown in this answer, ExecuteNonQuery uses sp_executesql when CommandType is CommandType.Text and command has parameters.
The C# code in this question doesn't set the CommandType explicitly, and it is Text by default, so the most likely end result is that the CREATE TABLE #ConnectionContextInfo batch is wrapped in sp_executesql. You can verify this in SQL Profiler.
It is well known that sp_executesql runs in its own scope (essentially it is a nested stored procedure). Search for "sp_executesql temp table". Here is one example: Execute sp_executeSql for select...into #table but Can't Select out Temp Table Data
So, a temp table #ConnectionContextInfo is created in the nested scope of sp_executesql and is automatically deleted as soon as sp_executesql returns.
The following query that is run by adapter.Fill doesn't see this temp table.
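To make the scoping concrete, here is a rough C# repro sketch (System.Data.SqlClient assumed; the table and parameter names are invented): the first, parameterized batch is sent via sp_executesql, so the temp table it creates is gone by the time the second command runs on the very same connection.
using System.Data;
using System.Data.SqlClient;

static void ReproTempTableScope(SqlConnection connection) // connection already open
{
    using (var create = connection.CreateCommand())
    {
        // Parameterized text => ADO.NET wraps it in sp_executesql,
        // so #Repro exists only inside that nested scope.
        create.CommandText =
            "CREATE TABLE #Repro(Val NVARCHAR(10)); INSERT INTO #Repro VALUES (@Val);";
        create.Parameters.Add("@Val", SqlDbType.NVarChar, 10).Value = "x";
        create.ExecuteNonQuery();
    }

    using (var read = connection.CreateCommand())
    {
        // Same connection and SPID, but #Repro was dropped when sp_executesql returned.
        read.CommandText = "SELECT COUNT(*) FROM #Repro;";
        read.ExecuteScalar(); // throws: "Invalid object name '#Repro'"
    }
}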
What to do?
Make sure that CREATE TABLE #ConnectionContextInfo statement is not wrapped into sp_executesql.
In your case you can try to split the single batch that contains both CREATE TABLE #ConnectionContextInfo and INSERT INTO #ConnectionContextInfo into two batches. The first batch/query would contain only the CREATE TABLE statement, without any parameters. The second batch/query would contain the INSERT INTO statement with its parameter(s).
I'm not sure it will help, but it's worth a try.
If that doesn't work you can build one T-SQL batch that creates a temp table, inserts data into it and calls your stored procedure. All in one SqlCommand, all in one batch. This whole SQL will be wrapped in sp_executesql, but it would not matter, because the scope in which temp table is created will be the same scope in which stored procedure is called. Technically it will work, but I wouldn't recommend following this path.
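For completeness, a rough sketch of that single-batch variant (not recommended, as said above; dbo.MyProc and the username variable are placeholders):
using (var cmd = connection.CreateCommand())
{
    // One batch: create the temp table, fill it, and call the procedure.
    // The whole batch runs inside sp_executesql, but the procedure is
    // invoked from the same scope that created #ConnectionContextInfo.
    cmd.CommandText = @"
        CREATE TABLE #ConnectionContextInfo(
            AttributeName  VARCHAR(64) PRIMARY KEY,
            AttributeValue VARCHAR(1024)
        );
        INSERT INTO #ConnectionContextInfo VALUES ('Username', @Username);
        EXEC dbo.MyProc;";
    cmd.Parameters.Add("@Username", SqlDbType.NVarChar, 128).Value = username;

    using (var reader = cmd.ExecuteReader())
    {
        // consume the procedure's result set(s) here
    }
}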
This is not an answer to the question, but a suggestion for solving the underlying problem.
To be honest, the whole approach looks strange. If you want to pass some data into the stored procedure, why not use parameters of this stored procedure? This is what they are for - to pass data into the procedure. There is no real need to use a temp table for that. You can use a table-valued parameter (T-SQL, .NET) if the data that you are passing is complex. It is definitely overkill if it is simply a Username.
Your stored procedure needs to be aware of the temp table, it needs to know its name and structure, so I don't see what the problem is with having an explicit table-valued parameter instead. Even the code of your procedure would not change much. You'd use @ConnectionContextInfo instead of #ConnectionContextInfo.
Using temp tables for what you described makes sense to me only if you are using SQL Server 2005 or earlier, which doesn't have table-valued parameters. They were added in SQL Server 2008.
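As an illustration of that suggestion, a hedged sketch of the table-valued-parameter route; the type dbo.ContextInfoType and procedure dbo.MyProc are invented here, and the TVP type would first have to be created on the server (e.g. CREATE TYPE dbo.ContextInfoType AS TABLE (AttributeName VARCHAR(64), AttributeValue VARCHAR(1024))):
using System.Data;
using System.Data.SqlClient;

// The DataTable's columns must match the TVP type's columns.
var contextInfo = new DataTable();
contextInfo.Columns.Add("AttributeName", typeof(string));
contextInfo.Columns.Add("AttributeValue", typeof(string));
contextInfo.Rows.Add("Username", username); // username is a placeholder variable

using (var cmd = connection.CreateCommand())
{
    cmd.CommandText = "dbo.MyProc";                  // hypothetical procedure
    cmd.CommandType = CommandType.StoredProcedure;

    var tvp = cmd.Parameters.AddWithValue("@ConnectionContextInfo", contextInfo);
    tvp.SqlDbType = SqlDbType.Structured;
    tvp.TypeName = "dbo.ContextInfoType";            // hypothetical TVP type

    cmd.ExecuteNonQuery();
}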
MINOR ISSUE: I am going to assume for the moment that the code posted in the Question isn't the full piece of code that is running. Not only are there variables used that we don't see getting declared (e.g. AllowInsertConnectionContextInfo), but there is a glaring omission in the setConnectionContextInfo method: the command object is created but never is its CommandText property set to commandBuilder.ToString(), hence it appears to be an empty SQL batch. I'm sure that this is actually being handled correctly since 1) I believe submitting an empty batch will generate an exception, and 2) the question does mention that the temp table creation appears in the SQL Profiler output. Still, I am pointing this out as it implies that there could be additional code that is relevant to the observed behavior that is not shown in the question, making it more difficult to give a precise answer.
THAT BEING SAID, as mentioned in @Vladimir's fine answer, due to the query running in a sub-process (i.e. sp_executesql), local temporary objects -- tables and stored procedures -- do not survive the completion of that sub-process and hence are not available in the parent context.
Global temporary objects and permanent/non-temporary objects will survive the completion of the sub-process, but both of those options, in their typical usage, introduce concurrency issues: you would need to test for the existence first before attempting to create the table, and you would need a way to distinguish one process from another. So these are not really a great option, at least not in their typical usage (more on that later).
Assuming that you cannot pass any values into the Stored Procedure (else you could simply pass in the username as @Vladimir suggested in his answer), you have a few options:
The easiest solution, given the current code, would be to separate the creation of the local temporary table from the INSERT command (also mentioned in @Vladimir's answer). As previously mentioned, the issue you are encountering is due to the query running within sp_executesql. And the reason sp_executesql is being used is to handle the parameter @Username. So, the fix could be as simple as changing the current code to be the following:
string _Command = @"
CREATE TABLE #ConnectionContextInfo(
AttributeName VARCHAR(64) PRIMARY KEY,
AttributeValue VARCHAR(1024)
);";
using (var command = connection.CreateCommand())
{
command.CommandText = _Command;
command.ExecuteNonQuery();
}
_Command = @"
INSERT INTO #ConnectionContextInfo VALUES ('Username', @Username);";
using (var command = connection.CreateCommand())
{
command.CommandText = _Command;
// do not use AddWithValue()!
SqlParameter _UserName = new SqlParameter("@Username", SqlDbType.NVarChar, 128);
_UserName.Value = username;
command.Parameters.Add(_UserName);
command.ExecuteNonQuery();
}
Please note that temporary objects -- local and global -- cannot be accessed in T-SQL User-Defined Functions or Table-Valued Functions.
A better solution (most likely) would be to use CONTEXT_INFO, which is essentially session memory. It is a VARBINARY(128) value but changes to it survive any sub-process since it is not an object. Not only does this remove the current complication you are facing, but it also reduces tempdb I/O considering that you are creating and dropping a temporary table each time this process runs, and doing an INSERT, and all 3 of those operations are written to disk twice: first in the Transaction Log, then in the data file. You can use this in the following manner:
string _Command = @"
DECLARE @User VARBINARY(128) = CONVERT(VARBINARY(128), @Username);
SET CONTEXT_INFO @User;
";
using (var command = connection.CreateCommand())
{
command.CommandText = _Command;
// do not use AddWithValue()!
SqlParameter _UserName = new SqlParameter("@Username", SqlDbType.NVarChar, 128);
_UserName.Value = username;
command.Parameters.Add(_UserName);
command.ExecuteNonQuery();
}
And then you get the value within the Stored Procedure / User-Defined Function / Table-Valued Function / Trigger via:
DECLARE @Username NVARCHAR(128) = CONVERT(NVARCHAR(128), CONTEXT_INFO());
That works just fine for a single value, but if you need multiple values, or if you are already using CONTEXT_INFO for another purpose, then you either need to go back to one of the other methods described here, OR, if using SQL Server 2016 (or newer), you can use SESSION_CONTEXT, which is similar to CONTEXT_INFO but is a HashTable / Key-Value pairs.
Another benefit of this approach is that CONTEXT_INFO (at least, I haven't yet tried SESSION_CONTEXT) is available in T-SQL User-Defined Functions and Table-Valued Functions.
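If SESSION_CONTEXT is available (SQL Server 2016+), a minimal sketch of setting it from the client via sys.sp_set_session_context might look like this (names are illustrative):
using (var cmd = connection.CreateCommand())
{
    cmd.CommandText = "sys.sp_set_session_context";
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add("@key", SqlDbType.NVarChar, 128).Value = "Username";
    cmd.Parameters.Add("@value", SqlDbType.Variant).Value = username; // sql_variant on the server
    cmd.ExecuteNonQuery();
}
// Inside the procedure / function:
//   DECLARE @Username NVARCHAR(128) = CONVERT(NVARCHAR(128), SESSION_CONTEXT(N'Username'));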
Finally, another option would be to create a global temporary table. As mentioned above, global objects have the benefit of surviving sub-processes, but they also have the drawback of complicating concurrency. A seldom-used trick to get the benefit without the drawback is to give the temporary object a unique, session-based name, rather than add a column to hold a unique, session-based value. Using a name that is unique to the session removes any concurrency issues while allowing you to use an object that will get automatically cleaned up when the connection is closed (so no need to worry about a process that creates a global temporary table and then runs into an error before completing, whereas using a permanent table would require cleanup, or at least an existence check at the beginning).
Keeping in mind the restriction that we cannot pass any value into the Stored Procedure, we need to use a value that already exists at the data layer. The value to use would be the session_id / SPID. Of course, this value does not exist in the app layer, so it has to be retrieved, but there was no restriction placed on going in that direction.
int _SessionId;
using (var command = connection.CreateCommand())
{
command.CommandText = @"SET @SessionID = @@SPID;";
SqlParameter _paramSessionID = new SqlParameter("@SessionID", SqlDbType.Int);
_paramSessionID.Direction = ParameterDirection.Output;
command.Parameters.Add(_paramSessionID);
command.ExecuteNonQuery();
_SessionId = (int)_paramSessionID.Value;
}
string _Command = String.Format(@"
CREATE TABLE ##ConnectionContextInfo_{0}(
AttributeName VARCHAR(64) PRIMARY KEY,
AttributeValue VARCHAR(1024)
);
INSERT INTO ##ConnectionContextInfo_{0} VALUES('Username', @Username);", _SessionId);
using (var command = connection.CreateCommand())
{
command.CommandText = _Command;
SqlParameter _UserName = new SqlParameter("@Username", SqlDbType.NVarChar, 128);
_UserName.Value = username;
command.Parameters.Add(_UserName);
command.ExecuteNonQuery();
}
And then you get the value within the Stored Procedure / Trigger via:
DECLARE @Username NVARCHAR(128),
        @UsernameQuery NVARCHAR(4000);
SET @UsernameQuery = CONCAT(N'SELECT @tmpUserName = [AttributeValue]
FROM ##ConnectionContextInfo_', @@SPID, N' WHERE [AttributeName] = ''Username'';');
EXEC sp_executesql
    @UsernameQuery,
    N'@tmpUserName NVARCHAR(128) OUTPUT',
    @Username OUTPUT;
Please note that temporary objects -- local and global -- cannot be accessed in T-SQL User-Defined Functions or Table-Valued Functions.
Finally, it is possible to use a real / permanent (i.e. non-temporary) Table, provided that you include a column to hold a value specific to the current session. This additional column will allow for concurrent operations to work properly.
You can create the table in tempdb (yes, you can use tempdb as a regular DB; it doesn't need to hold only temporary objects starting with # or ##). The advantages of using tempdb are that the table is out of the way of everything else (it holds just temporary values, after all, and doesn't need to be restored, so tempdb using the SIMPLE recovery model is perfect), and that it gets cleaned up when the instance restarts (FYI: tempdb is created brand new as a copy of model each time SQL Server starts).
Just like with Option #3 above, we can again use the session_id / SPID value since it is common to all operations on this Connection (as long as the Connection remains open). But, unlike Option #3, the app code doesn't need the SPID value: it can be inserted automatically into each row using a Default Constraint. This simplifies the operation a little.
The concept here is to first check to see if the permanent table in tempdb exists. If it does, then make sure that there is no entry already in the table for the current SPID. If it doesn't, then create the table. Since it is a permanent table, it will continue to exist, even after the current process closes its Connection. Finally, insert the @Username parameter, and the SPID value will populate automatically.
// assume _Connection is already open
using (SqlCommand _Command = _Connection.CreateCommand())
{
_Command.CommandText = @"
IF (OBJECT_ID(N'tempdb.dbo.Usernames') IS NOT NULL)
BEGIN
IF (EXISTS(SELECT *
FROM [tempdb].[dbo].[Usernames]
WHERE [SessionID] = @@SPID
))
BEGIN
DELETE FROM [tempdb].[dbo].[Usernames]
WHERE [SessionID] = @@SPID;
END;
END;
ELSE
BEGIN
CREATE TABLE [tempdb].[dbo].[Usernames]
(
[SessionID] INT NOT NULL
CONSTRAINT [PK_Usernames] PRIMARY KEY
CONSTRAINT [DF_Usernames_SessionID] DEFAULT (@@SPID),
[Username] NVARCHAR(128) NULL,
[InsertTime] DATETIME NOT NULL
CONSTRAINT [DF_Usernames_InsertTime] DEFAULT (GETDATE())
);
END;
INSERT INTO [tempdb].[dbo].[Usernames] ([Username]) VALUES (@Username);
";
SqlParameter _UserName = new SqlParameter("@Username", SqlDbType.NVarChar, 128);
_UserName.Value = username;
_Command.Parameters.Add(_UserName);
_Command.ExecuteNonQuery();
}
And then you get the value within the Stored Procedure / User-Defined Function / Table-Valued Function / Trigger via:
SELECT [Username]
FROM [tempdb].[dbo].[Usernames]
WHERE [SessionID] = @@SPID;
Another benefit of this approach is that permanent tables are accessible in T-SQL User-Defined Functions and Table-Valued Functions.
"There are two types of temporary tables: local and global. They differ from each other in their names, their visibility, and their availability. Local temporary tables have a single number sign (#) as the first character of their names; they are visible only to the current connection for the user, and they are deleted when the user disconnects from the instance of SQL Server. Global temporary tables have two number signs (##) as the first characters of their names; they are visible to any user after they are created, and they are deleted when all users referencing the table disconnect from the instance of SQL Server." -- from here
So the answer to your problem is to use ## instead of # to make the local temporary table global.

Invalid Pattern name; Oracle Stored Procedure Input Parameter

I have a JDBC program which uses a CallableStatement object to set and register IN/OUT parameters for the stored procedure.
I have used ArrayDescriptor and an oracle.sql.ARRAY object, and set it as an input parameter with a user-defined datatype.
The user-defined datatype is TYPE CharArray1 IS TABLE OF CHAR(7). During execution, the error I receive is "invalid pattern name my-object".
I set the input in the following way:
ArrayDescriptor ad = ArrayDescriptor.createDescriptor("<package-name>.CharArray1", conn);
ARRAY arr = new ARRAY(ad, conn, new String[]{"1"});
callableStatement.setArray(3, arr );
where conn is my Connection object. I have checked the EXECUTE permission on the package for the datatype CharArray1. I have also removed the package name and checked; the error remains the same.
Thanks in advance. Please advise me as to what I'm doing wrong here.
Try to switch CHAR to VARCHAR2(7 char)

Dapper: Procedure or function has too many arguments specified

While using Dapper to call a stored procedure, I'm receiving the following error:
Procedure or function has too many arguments specified
I'm using DynamicParameters to add a list of simple parameters to the query.
The parameter code looks like this:
var parameters = new DynamicParameters();
parameters.Add(p.Name, p.Value, direction: p.Mode);
The query code looks like this:
var result = _connection.Query<T>(
string.Format("{0}.{1}", request.SchemaName, request.StoredProcedureName),
parameters,
commandType: CommandType.StoredProcedure,
transaction: _transaction);
The executing sql in the profiler shows as following:
exec dbo.storedProcedureName @ParameterNames1=N'ParameterName',@ParameterNames2=N'ParameterName',@RemoveUnused=1
@ParameterNames1 is not at all what the parameter is called. Actually, the names are being passed in as the values (N'ParameterName'). The @RemoveUnused parameter seems completely random to me, as it does not occur in the calling code at all.
The full code for this can be found here: GitHub project at lines 61 and 228.
Edit: I've found that the issue is caused by calling the same procedure twice, with different result sets: the first time I'm calling it with Query, the second time with Query again but with different generic type arguments. Why Dapper is having trouble with this scenario is still a mystery.
I recently came across this issue, and it appears to be caused by the following:
Your stored procedure can return multiple datasets (maybe based on a condition parameter).
You are calling the stored procedure using Query<T>() instead of QueryMultiple() and then mapping datasets via Read<T>.
We recently upgraded from an old version of Dapper to v1.4 in order to support Table Variable Parameters and we started experiencing this behaviour as a direct result of the upgrade.
Solution:
Convert your Query<T> based code to a QueryMultiple implementation.
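A rough sketch of that conversion, reusing the names from the question (FirstResult and SecondResult are placeholders for whatever the two result sets map to):
// Read both result sets from the one stored procedure call.
using (var grid = _connection.QueryMultiple(
    string.Format("{0}.{1}", request.SchemaName, request.StoredProcedureName),
    parameters,
    transaction: _transaction,
    commandType: CommandType.StoredProcedure))
{
    var first  = grid.Read<FirstResult>().ToList();   // requires System.Linq
    var second = grid.Read<SecondResult>().ToList();
}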
I simply can't reproduce this:
public void SO25069578_DynamicParams_Procs()
{
var parameters = new DynamicParameters();
parameters.Add("foo", "bar");
try { connection.Execute("drop proc SO25069578"); } catch { }
connection.Execute("create proc SO25069578 #foo nvarchar(max) as select #foo as [X]");
var tran = connection.BeginTransaction(); // gist used transaction; behaves the same either way, though
var row = connection.Query<HazX>("SO25069578", parameters,
commandType: CommandType.StoredProcedure, transaction: tran).Single();
tran.Rollback();
row.X.IsEqualTo("bar");
}
public class HazX
{
public string X { get; set; }
}
works fine. There is a RemoveUnused property on DynamicParameters, but when using dynamic parameters, that shouldn't be added. I've even tried using the template-based constructor:
parameters = new DynamicParameters(parameters);
but again: this works fine. Is it possible that you're using a really, really old version of dapper? What version are you using?
I realize this is an old thread. I'm however using the latest version of the Nuget package (1.60.6) and encountered this problem recently.
To reproduce this, you'll need a stored procedure that, based on an input parameter, can return one or two (more than one) result sets. In the code, I use two different extension methods to call it (QueryMultipleAsync, which sets the parameter to 1/true, and QueryAsync, which sets it to 0/false). If your test ends up calling the SP that returns multiple result sets first, subsequent calls that need one result set will fail with this error.
The only way I managed to solve this was to break down the SP into 2 so they have different names.
For reference, here is how I call the SP:
var data = await sqlConnection.QueryAsync<T>(
StoredProcedureName,
parms,
transaction: null,
commandTimeout: null,
commandType: CommandType.StoredProcedure)
and
var data = await sqlConnection.QueryMultipleAsync(
StoredProcedureName,
param: p,
commandType: CommandType.StoredProcedure)
.Map<Type1, Type2, long>
(
o1 => o1.Id,
o2 => o2.FkId ?? 0,
(o1, o2) => { o1.Children = o2.ToList(); }
);
This Dapper issue is caused by the Read method for reading datasets after a QueryMultiple.
In this case Dapper caches the parameters and if you call the same stored procedure with the same parameters using a Dapper Query method, it will fail.
To solve the problem, change the call to the QueryMultiple method from this:
var reader = conn.QueryMultiple (spName, pars, commandType: CommandType.StoredProcedure);
to this:
var cmd = new CommandDefinition (spName, pars, commandType: CommandType.StoredProcedure, flags: CommandFlags.NoCache);
var reader = conn.QueryMultiple (cmd);
Recently hit this problem, caused by calling the same procedure twice using different Dapper methods.
The first call to the stored procedure was via .QueryMultiple; calling the same procedure with parameters again using .QuerySingleOrDefault resulted in the parameters being the @ParameterNames1 and @RemoveUnused mentioned in the original question.

How to call Sybase stored procedure with named params using JDBC

I have a stored procedure in Sybase which I can invoke from my favourite SQL client like this:
exec getFooBar @Foo='FOO', @Bar='BAR'
It returns a table of results, so it's like a query. It actually has many parameters, but I only want to call it with Foo, and sometimes with Foo and Bar specified.
Sybase is ASE 15.0, I am using jConnect 6.0.5.
I can invoke this using a PreparedCall if I specify only the first parameter:
CallableStatement cs = conn.prepareCall("{call getFooBar(?) }");
cs.setString(1,"FOO");
However I can't use the @Foo notation to pass my params:
conn.prepareCall("{call getFooBar @Foo='FOO' }");
conn.prepareCall("call getFooBar @Foo='FOO' ");
conn.prepareCall("{call getFooBar @Foo=? }"); // + setString(1,'FOO')
In all these cases I get exception from the DB telling me that the Foo parameter is expected:
com.sybase.jdbc3.jdbc.SybSQLException: Procedure getFooBar expects
parameter @Foo, which was not supplied
Ideally I'd like to do this with Spring JdbcTemplate or SimpleJdbcCall, but with those I could not even get to the point where I could with plain old JDBC.
So to summarize, I'd like to avoid ending up with:
{ call getFooBar(?,null,null,null,null,null,?,null,null) }
Is that possible using JDBC or better Spring?
Not the most efficient solution, but have you tried using a plain Statement with EXEC?
eg.
String mySql = "EXEC getFooBar @Foo='FOO', @Bar='BAR'";
ResultSet rs = conn.createStatement().executeQuery(mySql);

How do you retrieve the return value of a DB2 SQL sproc using Perl DBI?

I need to retrieve the value returned by a DB2 sproc that I have written. The sproc returns the number of rows in a table and is used by the calling process to decide whether or not to update other data.
I have looked at several similar questions on SO but they refer to the use of out parameters instead of using the sproc's return value, for example:
Perl Dbi and stored procedures
I am using a standard DBI connection to the database with both RaiseError and PrintError enabled.
$sql_stmt = "call MY_TABLE_SPACE.MY_SPROC('2011-10-31')";
$sth = $dbh->prepare($sql_stmt)
    or die "Unable to prepare SQL '$sql_stmt': " . $dbh->errstr;
$rsp = 0;
$rsp = $sth->execute();
unless ($rsp) {
    print(STDERR "Unable to execute sproc: " . $dbh->errstr . "\n");
}
print(STDERR "$?\n");
I have tried looking at $h->err for both the statement handle and the db handle.
I would really prefer communicating the number of rows via a return code rather than using SQLSTATE mechanism if I can.
Edit:
I ended up using a dedicated out parameter to communicate the number of rows updated, as follows:
$sql_stmt = "call MY_TABLE_SPACE.MY_SPROC('2011-10-31', ?)";
$sth = $dbh->prepare($sql_stmt)
    or die "Unable to prepare SQL '$sql_stmt': " . $dbh->errstr;
$sth->bind_param_inout(1, \$rows_updated, 128)
    or die "Unable to bind output parameter: " . $dbh->errstr;
$rows_updated = 0;
$rsp = 0;
$rsp = $sth->execute();
unless ($rsp) {
    print(STDERR "Unable to execute sproc: " . $dbh->errstr . "\n");
}
print(STDERR "$rows_updated\n");
Edit 2:
And now, thinking about this further, I have realised that I should apply the PragProg principle of "Tell, Don't Ask." That is, I shouldn't call the sproc, have it give me back a number, and then decide whether or not to call another sproc, i.e. "Ask".
I should just call the first sproc and have it decide whether it should call the other sproc or not, i.e. "Tell", and let it decide.
What is wrong with using an output parameter in your procedure? I've not got a working DB2 lying around right now or I'd provide an example, but when I was using it I'm sure you could define output parameters in procedures and bind them with bind_param_inout. I cannot remember if a DB2 procedure can return a value (like a function), but if it can, then using "? = call MY_TABLE_SPACE.MY_SPROC('2011-10-31')" would allow you to bind the output return value. If that doesn't work, you could use a DB2 function, which definitely can return a value. However, at the end of the day, the way you get data out of a procedure/function is to bind output parameters - that is just the way it is.
I've no idea what you mean by "using SQLSTATE". I've also no idea what you mean by looking at $h->err as that is only set if the procedure fails or you cannot call the procedure (SQL error etc).