Dapper: Procedure or function has too many arguments specified - sql

While using Dapper to call a stored procedure, I'm receiving the following error:
Procedure or function has too many arguments specified
I'm using DynamicParameters to add a list of simple parameters to the query.
The parameter code looks like this:
var parameters = new DynamicParameters();
parameters.Add(p.Name, p.Value, direction: p.Mode);
The query code looks like this:
var result = _connection.Query<T>(
    string.Format("{0}.{1}", request.SchemaName, request.StoredProcedureName),
    parameters,
    commandType: CommandType.StoredProcedure,
    transaction: _transaction);
The SQL captured by the profiler looks like this:
exec dbo.storedProcedureName @ParameterNames1=N'ParameterName',@ParameterNames2=N'ParameterName',@RemoveUnused=1
@ParameterNames1 is not at all what the parameter is called. In fact, the names are being passed in as the values (N'ParameterName'). The @RemoveUnused parameter seems completely random to me, as it does not appear in the calling code at all.
The full code for this can be found here: GitHub project at lines 61 and 228.
Edit: I've found that the issue is caused by calling the same procedure twice, with different result sets. So the first time I'm calling it with Query<T1>, the second time with Query<T2>. Why Dapper is having trouble with this scenario is still a mystery.

I recently came across this issue and it appears to be caused by the following:
Your stored procedure can return multiple result sets (maybe based on a condition parameter).
You are calling the stored procedure using Query<T>() instead of QueryMultiple() and then mapping the result sets via Read<T>.
We recently upgraded from an old version of Dapper to v1.4 in order to support table-valued parameters, and we started experiencing this behaviour as a direct result of the upgrade.
Solution:
Convert your Query<T> based code to a QueryMultiple implementation.
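As a rough sketch of that conversion (the procedure name and the Foo/Bar result types are placeholders, not from the original code):

using (var multi = connection.QueryMultiple("dbo.MyProc", parameters,
    commandType: CommandType.StoredProcedure))
{
    var first = multi.Read<Foo>().ToList();   // first result set
    var second = multi.Read<Bar>().ToList();  // second result set, when the proc returns one
}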

I simply can't reproduce this:
public void SO25069578_DynamicParams_Procs()
{
    var parameters = new DynamicParameters();
    parameters.Add("foo", "bar");
    try { connection.Execute("drop proc SO25069578"); } catch { }
    connection.Execute("create proc SO25069578 @foo nvarchar(max) as select @foo as [X]");
    var tran = connection.BeginTransaction(); // gist used transaction; behaves the same either way, though
    var row = connection.Query<HazX>("SO25069578", parameters,
        commandType: CommandType.StoredProcedure, transaction: tran).Single();
    tran.Rollback();
    row.X.IsEqualTo("bar");
}
public class HazX
{
    public string X { get; set; }
}
works fine. There is a RemoveUnused property on DynamicParameters, but when using dynamic parameters it shouldn't be added. I've even tried using the template-based constructor:
parameters = new DynamicParameters(parameters);
but again: this works fine. Is it possible that you're using a really, really old version of Dapper? What version are you using?

I realize this is an old thread. I'm however using the latest version of the NuGet package (1.60.6) and encountered this problem recently.
To reproduce this, you'll need a stored procedure that, based on an input parameter, can return one or more result sets. In the code I also call it with two different extension methods: QueryMultipleAsync (which sets the parameter to 1 / true) and QueryAsync (which sets it to 0 / false). If your test ends up calling the SP to return multiple result sets first, subsequent calls that need a single result set will fail with this error.
The only way I managed to solve this was to break the SP down into two so they have different names.
For reference, here is how I call the SP:
var data = await sqlConnection.QueryAsync<T>(
    StoredProcedureName,
    parms,
    transaction: null,
    commandTimeout: null,
    commandType: CommandType.StoredProcedure);
and
var data = await sqlConnection.QueryMultipleAsync(
        StoredProcedureName,
        param: p,
        commandType: CommandType.StoredProcedure)
    .Map<Type1, Type2, long>( // Map is a custom extension method, not part of Dapper itself
        o1 => o1.Id,
        o2 => o2.FkId ?? 0,
        (o1, o2) => { o1.Children = o2.ToList(); }
    );

This Dapper issue is caused by the Read<T> method used to read the result sets after a QueryMultiple. In this case Dapper caches the parameters, and if you then call the same stored procedure with the same parameters using a Dapper Query method, it will fail.
To solve the problem, change the QueryMultiple call from this:
var reader = conn.QueryMultiple(spName, pars, commandType: CommandType.StoredProcedure);
to this:
var cmd = new CommandDefinition(spName, pars, commandType: CommandType.StoredProcedure, flags: CommandFlags.NoCache);
var reader = conn.QueryMultiple(cmd);
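If the single-result call is the one that fails, the same flag can be applied there as well; a sketch, assuming the same spName and pars:

var cmd = new CommandDefinition(spName, pars,
    commandType: CommandType.StoredProcedure, flags: CommandFlags.NoCache);
// CommandFlags.NoCache tells Dapper not to use its query cache for this command
var rows = conn.Query<T>(cmd);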

I recently hit this problem, caused by calling the same procedure twice using different Dapper methods.
The first call to the SQL stored procedure was via QueryMultiple. Calling the same procedure with parameters again using QuerySingleOrDefault resulted in the parameters being @ParameterNames1 and @RemoveUnused, as mentioned in the original question.

Related

Major performance difference between Entity Framework generated sp_executesql and direct query in SSMS

I'm using Entity Framework for making a rather large query. Recently this query started failing due to timeout exceptions.
When I started investigating this issue I used LINQPad and copied the SQL output directly into SSMS and ran the query. This query returns within 1 second!
The query then looks like (only for illustration, the real query is much larger)
DECLARE @p__linq__0 DateTime2 = '2017-10-01 00:00:00.0000000'
DECLARE @p__linq__1 DateTime2 = '2017-10-31 00:00:00.0000000'
SELECT
    [Project8].[Volgnummer] AS [Volgnummer],
    [Project8].[FkKlant] AS [FkKlant],
    -- rest omitted for brevity
I then used SQL Profiler to capture the actual SQL sent to the server. The query is exactly the same, with the difference that this query is encapsulated within a call to sp_executesql, like this:
exec sp_executesql N'SELECT
    [Project8].[Volgnummer] AS [Volgnummer],
    [Project8].[FkKlant] AS [FkKlant],
    -- rest omitted for brevity
',N'@p__linq__0 datetime2(7),@p__linq__1 datetime2(7)',
@p__linq__0='2017-10-01 00:00:00',@p__linq__1='2017-10-31 00:00:00'
When I copy/paste this query into SSMS it runs for 60 seconds, and it thus results in a timeout when executed from EF with default settings!
I can't wrap my head around why this difference is occurring, as this is the same query; the only difference is how it is executed.
I read a lot about why EF uses sp_executesql and I understand why. I also read that sp_executesql is different from EXEC because it makes use of the query plan cache, but I don't understand why the SQL optimizer has such difficulty creating a performant query plan for the sp_executesql version, whereas it is capable of creating a performant query plan for the direct query version.
I'm not sure if the complete query itself adds to the question. If it does, let me know and I will make an edit.
Thanks to the supplied comments I managed two things:
I now understand the query plan and the differences between parameter sniffing and variables in queries
I implemented a DbCommandInterceptor to add OPTION (OPTIMIZE FOR UNKNOWN) to the query when needed.
The SQL query compiled by Entity Framework can be intercepted before send to the server by adding an implementation to DbInterception.
Such an implementation is trivial to make:
public class QueryHintInterceptor : DbCommandInterceptor
{
    public override void ReaderExecuting(DbCommand command,
        DbCommandInterceptionContext<DbDataReader> interceptionContext)
    {
        var queryHint = " OPTION (OPTIMIZE FOR UNKNOWN)";
        if (!command.CommandText.EndsWith(queryHint))
        {
            command.CommandText += queryHint;
        }
        base.ReaderExecuting(command, interceptionContext);
    }
}
// Add to the interception process:
DbInterception.Add(new QueryHintInterceptor());
Since Entity Framework also caches the queries, I check whether the optimization has already been added.
But this approach will intercept all queries, and obviously one should not do this. As the DbCommandInterceptionContext gives access to the DbContext, I added an interface with a single property (ISupportQueryHints) to my DbContext, which I set to an optimization hint whenever a query needs it; a sketch of the interface is shown below.
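A minimal sketch of that interface (its exact shape is not shown in the original answer, so this is an assumption):

public interface ISupportQueryHints
{
    // Hint to append to the next query, e.g. "OPTIMIZE FOR UNKNOWN"; null or empty means none.
    string QueryHint { get; set; }
}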
The interceptor then looks like this:
public class QueryHintInterceptor : DbCommandInterceptor
{
    public override void ReaderExecuting(DbCommand command,
        DbCommandInterceptionContext<DbDataReader> interceptionContext)
    {
        var dbContext =
            interceptionContext.DbContexts.FirstOrDefault(d => d is ISupportQueryHints) as ISupportQueryHints;
        if (dbContext != null)
        {
            var queryHint = $" OPTION ({dbContext.QueryHint})";
            if (!command.CommandText.EndsWith(queryHint))
            {
                command.CommandText += queryHint;
            }
        }
        base.ReaderExecuting(command, interceptionContext);
    }
}
Where needed this can be used as:
public IEnumerable<SomeDto> QuerySomeDto()
{
    using (var dbContext = new MyQuerySupportingDbContext())
    {
        dbContext.QueryHint = "OPTIMIZE FOR UNKNOWN";
        return this.PerformQuery(dbContext);
    }
}
Because my application makes use of a message-based architecture surrounding commands and queries as described here, my implementation consists of a decorator around the query handlers in need of optimization. This decorator sets the query hint on the DbContext whenever needed; a rough sketch follows. This is however an implementation detail. The basic idea stays the same.
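A rough sketch of what such a decorator could look like (the IQueryHandler abstraction comes from the linked article; the names and wiring here are illustrative, not code from the answer):

public class QueryHintDecorator<TQuery, TResult> : IQueryHandler<TQuery, TResult>
{
    private readonly IQueryHandler<TQuery, TResult> decorated;
    private readonly ISupportQueryHints dbContext;

    public QueryHintDecorator(IQueryHandler<TQuery, TResult> decorated, ISupportQueryHints dbContext)
    {
        this.decorated = decorated;
        this.dbContext = dbContext;
    }

    public TResult Handle(TQuery query)
    {
        // Set the hint before the wrapped handler runs its query;
        // the interceptor above picks it up and appends it to the SQL.
        dbContext.QueryHint = "OPTIMIZE FOR UNKNOWN";
        return decorated.Handle(query);
    }
}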
I updated @Ric.Net's QueryHintInterceptor class to handle the case where multiple contexts are being used for a query and may have their own hints:
public class QueryHintInterceptor : DbCommandInterceptor
{
    public override void ReaderExecuting(DbCommand command, DbCommandInterceptionContext<DbDataReader> interceptionContext)
    {
        var contextHints = interceptionContext.DbContexts
            .Select(c => (c as ISupportQueryHints)?.QueryHint)
            .Where(h => !string.IsNullOrEmpty(h))
            .Distinct()
            .ToList();

        var queryHint = $"{System.Environment.NewLine}OPTION ({string.Join(", ", contextHints)})";
        if (contextHints.Any() && !command.CommandText.EndsWith(queryHint))
        {
            command.CommandText += queryHint;
        }
        base.ReaderExecuting(command, interceptionContext);
    }
}
Although honestly, if you're at that point, you might consider building a more robust solution like the one described here.

SQL Server connection context using temporary table cannot be used in stored procedures called with SqlDataAdapter.Fill

I want to have some information available to any stored procedure, such as the current user. Following the temporary table method indicated here, I have tried the following:
1) create temporary table when connection is opened
private void setConnectionContextInfo(SqlConnection connection)
{
    if (!AllowInsertConnectionContextInfo)
        return;
    var username = HttpContext.Current?.User?.Identity?.Name ?? "";
    var commandBuilder = new StringBuilder($@"
        CREATE TABLE #ConnectionContextInfo(
            AttributeName VARCHAR(64) PRIMARY KEY,
            AttributeValue VARCHAR(1024)
        );
        INSERT INTO #ConnectionContextInfo VALUES('Username', @Username);
    ");
    using (var command = connection.CreateCommand())
    {
        command.Parameters.AddWithValue("Username", username);
        command.ExecuteNonQuery();
    }
}
/// <summary>
/// checks if current connection exists / is closed and creates / opens it if necessary
/// also takes care of the special authentication required by V3 by building a windows impersonation context
/// </summary>
public override void EnsureConnection()
{
    try
    {
        lock (connectionLock)
        {
            if (Connection == null)
            {
                Connection = new SqlConnection(ConnectionString);
                Connection.Open();
                setConnectionContextInfo(Connection);
            }
            if (Connection.State == ConnectionState.Closed)
            {
                Connection.Open();
                setConnectionContextInfo(Connection);
            }
        }
    }
    catch (Exception ex)
    {
        if (Connection != null && Connection.State != ConnectionState.Open)
            Connection.Close();
        throw new ApplicationException("Could not open SQL Server Connection.", ex);
    }
}
2) Tested with a procedure which is used to populate a DataTable using SqlDataAdapter.Fill, by using the following function:
public DataTable GetDataTable(String proc, Dictionary<String, object> parameters, CommandType commandType)
{
    EnsureConnection();
    using (var command = Connection.CreateCommand())
    {
        if (Transaction != null)
            command.Transaction = Transaction;
        SqlDataAdapter adapter = new SqlDataAdapter(proc, Connection);
        adapter.SelectCommand.CommandTimeout = CommonConstants.DataAccess.DefaultCommandTimeout;
        adapter.SelectCommand.CommandType = commandType;
        if (Transaction != null)
            adapter.SelectCommand.Transaction = Transaction;
        ConstructCommandParameters(adapter.SelectCommand, parameters);
        DataTable dt = new DataTable();
        try
        {
            adapter.Fill(dt);
            return dt;
        }
        catch (SqlException ex)
        {
            var err = String.Format("Error executing stored procedure '{0}' - {1}.", proc, ex.Message);
            throw new TptDataAccessException(err, ex);
        }
    }
}
3) called procedure tries to get the username like this:
DECLARE @username VARCHAR(128) = (SELECT AttributeValue FROM #ConnectionContextInfo WHERE AttributeName = 'Username')
but #ConnectionContextInfo is no longer available in the context.
I have run SQL Profiler against the database to check what is happening:
the temporary table is created successfully, using a certain SPID
the procedure is called using the same SPID
Why is the temporary table not available within the procedure's scope?
In T-SQL, doing the following works:
create a temporary table
call a procedure that needs data from that particular temporary table
the temporary table is dropped only explicitly or when the current scope ends
Thanks.
As was shown in this answer, ExecuteNonQuery uses sp_executesql when CommandType is CommandType.Text and the command has parameters.
The C# code in this question doesn't set the CommandType explicitly, and it is Text by default, so the most likely end result of the code is that CREATE TABLE #ConnectionContextInfo is wrapped into sp_executesql. You can verify this in SQL Profiler.
It is well-known that sp_executesql is running in its own scope (essentially it is a nested stored procedure). Search for "sp_executesql temp table". Here is one example: Execute sp_executeSql for select...into #table but Can't Select out Temp Table Data
So, the temp table #ConnectionContextInfo is created in the nested scope of sp_executesql and is automatically deleted as soon as sp_executesql returns.
The query subsequently run by adapter.Fill doesn't see this temp table.
What to do?
Make sure that CREATE TABLE #ConnectionContextInfo statement is not wrapped into sp_executesql.
In your case you can try to split a single batch that contains both CREATE TABLE #ConnectionContextInfo and INSERT INTO #ConnectionContextInfo into two batches. The first batch/query would contain only CREATE TABLE statement without any parameters. The second batch/query would contain INSERT INTO statement with parameter(s).
I'm not sure it would help, but worth a try.
If that doesn't work you can build one T-SQL batch that creates a temp table, inserts data into it and calls your stored procedure. All in one SqlCommand, all in one batch. This whole SQL will be wrapped in sp_executesql, but it would not matter, because the scope in which temp table is created will be the same scope in which stored procedure is called. Technically it will work, but I wouldn't recommend following this path.
This is not an answer to the question, but a suggestion for solving the problem.
To be honest, the whole approach looks strange. If you want to pass some data into the stored procedure, why not use parameters of this stored procedure? This is what they are for - to pass data into the procedure. There is no real need to use a temp table for that. You can use a table-valued parameter (T-SQL, .NET) if the data that you are passing is complex. It is definitely overkill if it is simply a Username.
Your stored procedure needs to be aware of the temp table, it needs to know its name and structure, so I don't understand what the problem is with having an explicit table-valued parameter instead. Even the code of your procedure would not change much: you'd use @ConnectionContextInfo instead of #ConnectionContextInfo.
Using temp tables for what you described makes sense to me only if you are using SQL Server 2005 or earlier, which doesn't have table-valued parameters. They were added in SQL Server 2008.
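For illustration, a minimal sketch of passing a table-valued parameter from C# (the type and procedure names are assumptions, not taken from the question):

// Assumed to exist on the server:
// CREATE TYPE dbo.ConnectionContextInfoType AS TABLE
//     (AttributeName VARCHAR(64) PRIMARY KEY, AttributeValue VARCHAR(1024));
// CREATE PROCEDURE dbo.MyProc @ConnectionContextInfo dbo.ConnectionContextInfoType READONLY AS ...
var contextInfo = new DataTable();
contextInfo.Columns.Add("AttributeName", typeof(string));
contextInfo.Columns.Add("AttributeValue", typeof(string));
contextInfo.Rows.Add("Username", username);

using (var command = connection.CreateCommand())
{
    command.CommandText = "dbo.MyProc";
    command.CommandType = CommandType.StoredProcedure;
    command.Parameters.Add(new SqlParameter("@ConnectionContextInfo", SqlDbType.Structured)
    {
        TypeName = "dbo.ConnectionContextInfoType", // name of the user-defined table type
        Value = contextInfo
    });
    command.ExecuteNonQuery();
}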
MINOR ISSUE: I am going to assume for the moment that the code posted in the Question isn't the full piece of code that is running. Not only are there variables used that we don't see getting declared (e.g. AllowInsertConnectionContextInfo), but there is a glaring omission in the setConnectionContextInfo method: the command object is created but never is its CommandText property set to commandBuilder.ToString(), hence it appears to be an empty SQL batch. I'm sure that this is actually being handled correctly since 1) I believe submitting an empty batch will generate an exception, and 2) the question does mention that the temp table creation appears in the SQL Profiler output. Still, I am pointing this out as it implies that there could be additional code that is relevant to the observed behavior that is not shown in the question, making it more difficult to give a precise answer.
THAT BEING SAID, as mentioned in @Vladimir's fine answer, due to the query running in a sub-process (i.e. sp_executesql), local temporary objects -- tables and stored procedures -- do not survive the completion of that sub-process and hence are not available in the parent context.
Global temporary objects and permanent/non-temporary objects will survive the completion of the sub-process, but both of those options, in their typical usage, introduce concurrency issues: you would need to test for the existence first before attempting to create the table, and you would need a way to distinguish one process from another. So these are not really a great option, at least not in their typical usage (more on that later).
Assuming that you cannot pass any values into the Stored Procedure (else you could simply pass in the username as @Vladimir suggested in his answer), you have a few options:
The easiest solution, given the current code, would be to separate the creation of the local temporary table from the INSERT command (also mentioned in @Vladimir's answer). As previously mentioned, the issue you are encountering is due to the query running within sp_executesql, and the reason sp_executesql is being used is to handle the parameter @Username. So, the fix could be as simple as changing the current code to the following:
string _Command = @"
CREATE TABLE #ConnectionContextInfo(
    AttributeName VARCHAR(64) PRIMARY KEY,
    AttributeValue VARCHAR(1024)
);";
using (var command = connection.CreateCommand())
{
    command.CommandText = _Command;
    command.ExecuteNonQuery();
}

_Command = @"
INSERT INTO #ConnectionContextInfo VALUES ('Username', @Username);";
using (var command = connection.CreateCommand())
{
    command.CommandText = _Command;
    // do not use AddWithValue()!
    SqlParameter _UserName = new SqlParameter("@Username", SqlDbType.NVarChar, 128);
    _UserName.Value = username;
    command.Parameters.Add(_UserName);
    command.ExecuteNonQuery();
}
Please note that temporary objects -- local and global -- cannot be accessed in T-SQL User-Defined Functions or Table-Valued Functions.
A better solution (most likely) would be to use CONTEXT_INFO, which is essentially session memory. It is a VARBINARY(128) value but changes to it survive any sub-process since it is not an object. Not only does this remove the current complication you are facing, but it also reduces tempdb I/O considering that you are creating and dropping a temporary table each time this process runs, and doing an INSERT, and all 3 of those operations are written to disk twice: first in the Transaction Log, then in the data file. You can use this in the following manner:
string _Command = @"
DECLARE @User VARBINARY(128) = CONVERT(VARBINARY(128), @Username);
SET CONTEXT_INFO @User;
";
using (var command = connection.CreateCommand())
{
    command.CommandText = _Command;
    // do not use AddWithValue()!
    SqlParameter _UserName = new SqlParameter("@Username", SqlDbType.NVarChar, 128);
    _UserName.Value = username;
    command.Parameters.Add(_UserName);
    command.ExecuteNonQuery();
}
And then you get the value within the Stored Procedure / User-Defined Function / Table-Valued Function / Trigger via:
DECLARE @Username NVARCHAR(128) = CONVERT(NVARCHAR(128), CONTEXT_INFO());
That works just fine for a single value, but if you need multiple values, or if you are already using CONTEXT_INFO for another purpose, then you either need to go back to one of the other methods described here, OR, if using SQL Server 2016 (or newer), you can use SESSION_CONTEXT, which is similar to CONTEXT_INFO but holds key-value pairs, like a hash table.
Another benefit of this approach is that CONTEXT_INFO (at least, I haven't yet tried SESSION_CONTEXT) is available in T-SQL User-Defined Functions and Table-Valued Functions.
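For completeness, a minimal sketch of the SESSION_CONTEXT variant on SQL Server 2016+ (the key name 'Username' is an arbitrary choice, not from the question):

using (var command = connection.CreateCommand())
{
    // sp_set_session_context stores a key/value pair for the lifetime of the session
    command.CommandText = "EXEC sp_set_session_context @key = N'Username', @value = @Username;";
    SqlParameter _UserName = new SqlParameter("@Username", SqlDbType.NVarChar, 128);
    _UserName.Value = username;
    command.Parameters.Add(_UserName);
    command.ExecuteNonQuery();
}
// Inside the procedure / function, read it back via:
//     DECLARE @Username NVARCHAR(128) = CONVERT(NVARCHAR(128), SESSION_CONTEXT(N'Username'));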
Finally, another option would be to create a global temporary table. As mentioned above, global objects have the benefit of surviving sub-processes, but they also have the drawback of complicating concurrency. A seldom-used way to get the benefit without the drawback is to give the temporary object a unique, session-based name, rather than adding a column to hold a unique, session-based value. Using a name that is unique to the session removes any concurrency issues while allowing you to use an object that will get automatically cleaned up when the connection is closed (so no need to worry about a process that creates a global temporary table and then runs into an error before completing, whereas using a permanent table would require cleanup, or at least an existence check at the beginning).
Keeping in mind the restriction that we cannot pass any value into the Stored Procedure, we need to use a value that already exists at the data layer. The value to use would be the session_id / SPID. Of course, this value does not exist in the app layer, so it has to be retrieved, but there was no restriction placed on going in that direction.
int _SessionId;
using (var command = connection.CreateCommand())
{
    command.CommandText = @"SET @SessionID = @@SPID;";
    SqlParameter _paramSessionID = new SqlParameter("@SessionID", SqlDbType.Int);
    _paramSessionID.Direction = ParameterDirection.Output;
    command.Parameters.Add(_paramSessionID);
    command.ExecuteNonQuery();
    _SessionId = (int)_paramSessionID.Value;
}
string _Command = String.Format(@"
CREATE TABLE ##ConnectionContextInfo_{0}(
    AttributeName VARCHAR(64) PRIMARY KEY,
    AttributeValue VARCHAR(1024)
);
INSERT INTO ##ConnectionContextInfo_{0} VALUES('Username', @Username);", _SessionId);
using (var command = connection.CreateCommand())
{
    command.CommandText = _Command;
    SqlParameter _UserName = new SqlParameter("@Username", SqlDbType.NVarChar, 128);
    _UserName.Value = username;
    command.Parameters.Add(_UserName);
    command.ExecuteNonQuery();
}
And then you get the value within the Stored Procedure / Trigger via:
DECLARE @Username NVARCHAR(128),
        @UsernameQuery NVARCHAR(4000);

SET @UsernameQuery = CONCAT(N'SELECT @tmpUserName = [AttributeValue]
    FROM ##ConnectionContextInfo_', @@SPID, N' WHERE [AttributeName] = ''Username'';');

EXEC sp_executesql
    @UsernameQuery,
    N'@tmpUserName NVARCHAR(128) OUTPUT',
    @Username OUTPUT;
Please note that temporary objects -- local and global -- cannot be accessed in T-SQL User-Defined Functions or Table-Valued Functions.
Finally, it is possible to use a real / permanent (i.e. non-temporary) Table, provided that you include a column to hold a value specific to the current session. This additional column will allow for concurrent operations to work properly.
You can create the table in tempdb (yes, you can use tempdb as a regular DB, doesn't need to be only temporary objects starting with # or ##). The advantages of using tempdb is that the table is out of the way of everything else (it is just temporary values, after all, and doesn't need to be restored, so tempdb using SIMPLE recovery model is perfect), and it gets cleaned up when the Instance restarts (FYI: tempdb is created brand new as a copy of model each time SQL Server starts).
Just like with Option #3 above, we can again use the session_id / SPID value since it is common to all operations on this Connection (as long as the Connection remains open). But, unlike Option #3, the app code doesn't need the SPID value: it can be inserted automatically into each row using a Default Constraint. This simplifies the operation a little.
The concept here is to first check to see if the permanent table in tempdb exists. If it does, then make sure that there is no entry already in the table for the current SPID. If it doesn't, then create the table. Since it is a permanent table, it will continue to exist, even after the current process closes its Connection. Finally, insert the @Username parameter, and the SPID value will populate automatically.
// assume _Connection is already open
using (SqlCommand _Command = _Connection.CreateCommand())
{
    _Command.CommandText = @"
IF (OBJECT_ID(N'tempdb.dbo.Usernames') IS NOT NULL)
BEGIN
    IF (EXISTS(SELECT *
               FROM [tempdb].[dbo].[Usernames]
               WHERE [SessionID] = @@SPID
              ))
    BEGIN
        DELETE FROM [tempdb].[dbo].[Usernames]
        WHERE [SessionID] = @@SPID;
    END;
END;
ELSE
BEGIN
    CREATE TABLE [tempdb].[dbo].[Usernames]
    (
        [SessionID] INT NOT NULL
            CONSTRAINT [PK_Usernames] PRIMARY KEY
            CONSTRAINT [DF_Usernames_SessionID] DEFAULT (@@SPID),
        [Username] NVARCHAR(128) NULL,
        [InsertTime] DATETIME NOT NULL
            CONSTRAINT [DF_Usernames_InsertTime] DEFAULT (GETDATE())
    );
END;

INSERT INTO [tempdb].[dbo].[Usernames] ([Username]) VALUES (@Username);
";
    SqlParameter _UserName = new SqlParameter("@Username", SqlDbType.NVarChar, 128);
    _UserName.Value = username;
    _Command.Parameters.Add(_UserName);
    _Command.ExecuteNonQuery();
}
And then you get the value within the Stored Procedure / User-Defined Function / Table-Valued Function / Trigger via:
SELECT [Username]
FROM [tempdb].[dbo].[Usernames]
WHERE [SessionID] = @@SPID;
Another benefit of this approach is that permanent tables are accessible in T-SQL User-Defined Functions and Table-Valued Functions.
"There are two types of temporary tables: local and global. They differ from each other in their names, their visibility, and their availability. Local temporary tables have a single number sign (#) as the first character of their names; they are visible only to the current connection for the user, and they are deleted when the user disconnects from the instance of SQL Server. Global temporary tables have two number signs (##) as the first characters of their names; they are visible to any user after they are created, and they are deleted when all users referencing the table disconnect from the instance of SQL Server." -- from here
So the answer to your problem is to use ## instead of # to make the local temporary table global.

Knowledge & Connect PHP API, Found object(Account or Answer) but contains only null fields

I'm facing some strange issues when I try to fetch (Connect PHP API) / searchContent (Knowledge Foundation API) following the tutorials/documentation.
Behaviour and output
Following the documentation, we initialize the API. The function error_get_last() (called after the fetch) states that the core read-only file (we are not allowed to modify it) contains an error:
Array ( [type] => 8 [message] => Undefined index: REDIRECT_URL [file] => /cgi-bin/${interface_name}.cfg/scripts/cp/core/framework/3.2.4/init.php [line] => 246 )
After initialization, we call the fetch function to retrieve an account. If we give a wrong ID, it returns an error:
Invalid ID: No such Account with ID = 32
Otherwise, furnishing a correct ID returns an Account object with all fields populated as NULL:
object(RightNow\Connect\v1_2\Account)#22 (25) {
["ID"]=>
NULL
["LookupName"]=>
NULL
["CreatedTime"]=>
NULL
["UpdatedTime"]=>
NULL
["AccountHierarchy"]=>
NULL
["Attributes"]=>
NULL
["Country"]=>
NULL
["CustomFields"]=>
NULL
["DisplayName"]=>
NULL
["DisplayOrder"]=>
NULL
["EmailNotification"]=>
NULL
["Emails"]=>
NULL
["Login"]=>
NULL
/* [...] */
["StaffGroup"]=>
NULL
}
Attempts, workaround and troubleshooting information
Configuration: the account used by InitConnectAPI() has the required permissions
Initialization: the call to InitConnectAPI() does not throw any exception (added a try-catch block)
Call to the fetch function: As said above, the call to RNCPHP\Account::fetch($act_id) finds the account (invalid_id => error) but doesn't manage to populate the fields
No exception is thrown on the RNCPHP::fetch($correct_id) call
The behaviour is the same when I try to retrieve an answer following a sample example from the Knowledge Foundation API : $token = \RNCK::StartInteraction(...) ; \RNCK::searchContent($token, 'lorem ipsum');
Using PHP's SoapClient, I manage to retrieve populated objects. However, it's not part of the standard, and calling one's own local web service is not good practice.
Code reproducing the issue
error_reporting(E_ALL);
require_once(get_cfg_var('doc_root') . '/include/ConnectPHP/Connect_init.phph');
InitConnectAPI();
use RightNow\Connect\v1_2 as RNCPHP;
/* [...] */
try
{
    $fetched_acct = RNCPHP\Account::fetch($correct_usr_id);
} catch ( \Exception $e)
{
    echo ($e->getMessage());
}
// Dump part
echo ("<pre>");
var_dump($fetched_acct);
echo ("</pre>");
// The core's error on which I have no control
print_r(error_get_last());
Questions:
Have any of you faced the same issue? What is the workaround/fix that would help me solve it?
According to the RNCPHP\Account::fetch($correct_usr_id) function behaviour, we can surmise that the issue comes from the 'fields populating' step, which might be part of the core (on which I have no power). How am I supposed to deal with this (fetch is static and Account doesn't seem abstract)?
I tried to use the debug_backtrace() function in order to have some visibility on what may go wrong, but it doesn't output relevant information. Is there any way I can get more debug information?
Thanks in advance,
Oracle Service Cloud uses lazy loading to populate the object variables from queried data with the Connect for PHP APIs. When you output the result of an object, it will appear as if each variable is empty, per your example. However, once you access a parameter, it becomes available. This is only an issue when you try to print your object, like in this example; accessing the data should be immediate.
To print your object, like in your example, you would need to iterate through the object variables and access each one first. You could build a helper class to do that through reflection. But, to illustrate with a single field, do the following:
$acct = RNCPHP\Account::fetch($correctId);
$acct->ID;
print_r($acct); // Will now "show" ID, but none of the other fields have been loaded.
In the real world, you probably just want to operate on the data. So, even though you cannot "see" the data in the object, it's there. In the example below, we're accessing the updated time of the account and then performing an action on the object if it meets a condition.
// Set to disabled if last updated more than 90 days ago
$acct = RNCPHP\Account::fetch($correctId);
$chkDate = time() - 7776000; // 7776000 seconds = 90 days
if($acct->UpdatedTime < $chkDate){
    $acct->Attributes->PermanentlyDisabled = true;
    $acct->save(RNCPHP\RNObject::SuppressAll);
}
If you were to print_r the object after the if condition, then you would see the UpdatedTime variable data because it was loaded at the condition check.

How to call Sybase stored procedure with named params using JDBC

I have a stored procedure in Sybase which I can invoke from my favourite SQL client like this:
exec getFooBar @Foo='FOO', @Bar='BAR'
It returns a table of results, so it's like a query. It actually has many parameters, but I only want to call it with Foo, and sometimes with Foo and Bar specified.
Sybase is ASE 15.0, I am using jConnect 6.0.5.
I can invoke this using a PreparedCall if I specify only the first parameter:
CallableStatement cs = conn.prepareCall("{call getFooBar(?) }");
cs.setString(1,"FOO");
However I can't use the @Foo notation to pass my params:
conn.prepareCall("{call getFooBar @Foo='FOO' }");
conn.prepareCall("call getFooBar @Foo='FOO' ");
conn.prepareCall("{call getFooBar @Foo=? }"); // + setString(1, "FOO")
In all these cases I get an exception from the DB telling me that the Foo parameter is expected:
com.sybase.jdbc3.jdbc.SybSQLException: Procedure getFooBar expects
parameter @Foo, which was not supplied
Ideally I'd like to do this with Spring JdbcTemplate or SimpleJdbcCall, but with those I could not even get to the point where I could with plain old JDBC.
So to summarize, I'd like to avoid ending up with:
{ call getFooBar(?,null,null,null,null,null,?,null,null) }
Is that possible using JDBC or better Spring?
Not the most efficient solution, but have you tried using a plain Statement with EXEC?
e.g.
String mySql = "EXEC getFooBar @Foo='FOO', @Bar='BAR'";
ResultSet rs = conn.getConnection().createStatement().executeQuery(mySql);

See the SQL commands generated by EntityFramework: Cast exception

Based on this and this, I'm doing the following to get the SQL generated by Entity Framework 5.0:
var query = from s in db.ClassesDetails
            where s.ClassSet == "SetOne"
            orderby s.ClassNum
            select s.ClassNum;

var objectQuery = (System.Data.Objects.ObjectQuery)query; // <= problem!
var sql = objectQuery.ToTraceString();
However on the second line I get the following exception:
Unable to cast object of type 'System.Data.Entity.Infrastructure.DbQuery`1[System.Int16]' to type 'System.Data.Objects.ObjectQuery'.
Did something change since those SO answers were posted? What do I need to do to get the queries as strings? We're running against Azure SQL so can't run the usual SQL profiler tools :(
ObjectQuery is created when you are using ObjectContext. When you are using DbContext, it creates and uses DbQuery instead. Also, note that this is actually not a DbQuery but DbQuery<T>. I believe that to display the SQL for a DbQuery you can just call .ToString() on the DbQuery instance, so no cast should be required. Note that parameter values will not be displayed, though. Parameter values were added to the output very recently in EF6 - if you need this you can try the latest nightly build from http://entityframework.codeplex.com
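For example, applied to the query from the question (a sketch; in EF5 the output contains the generated SQL but not the parameter values):

var query = from s in db.ClassesDetails
            where s.ClassSet == "SetOne"
            orderby s.ClassNum
            select s.ClassNum;

// DbQuery<T>.ToString() returns the SQL that EF generated for the query
var sql = query.ToString();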