SQL INSERT sp_cursor Error

I have a pair of linked SQL servers: ServerA and ServerB. I want to write a simple INSERT INTO SELECT statement which will copy a row from ServerA's database to ServerB's database. ServerB's database was copied directly from ServerA's, and so they should have the exact same basic structure (same column names, etc.)
The problem is that when I try to execute the following statement:
INSERT INTO [ServerB].[data_collection].[dbo].[table1]
SELECT * FROM [ServerA].[data_collection].[dbo].[table1]
I get the following error:
Msg 16902, Level 16, State 48, Line 1
sp_cursor: The value of the parameter 'value' is invalid.
On the other hand, if I try to execute the following statement:
INSERT INTO [ServerB].[data_collection].[dbo].[table1] (Time)
SELECT Time FROM [ServerA].[data_collection].[dbo].[table1]
The statement works just fine, and the code executes as expected. It succeeds regardless of which columns, or how many of them, I specify.
So my question here is why would my INSERT INTO SELECT statement function properly when I explicitly specify which columns to copy, but not when I tell it to copy everything using "*"? My second question would then be: how do I fix the problem?

Googling around to follow up on my initial hunch, I found a source I consider reliable enough to cite in an answer.
The 'value' parameter specified isn't one of your columns; it is an optional argument to sp_cursor, which is called implicitly by your INSERT INTO...SELECT.
From SQL Server Central...
I have an SSIS package that needs to populate a SQL table with data from a pipe-delimited text file containing 992 (!) columns per record. ...Initially I'd set up the package to contain a data flow task to use an OLE DB destination control where the access mode was set to Table or view mode. For some reason though, when running the package it would crash, with an error stating the parameter 'value' was not valid in the sp_cursor procedure. On setting up a trace in Profiler to see what this control actually does, it appears it tries to insert the records using the sp_cursor procedure. Running the same query in SQL Server Management Studio gives the same result. After much testing and pulling of hair out, I've found that by replacing the sp_cursor statement with an insert statement the record populated fine, which suggests that sp_cursor cannot cope when more than a certain number of parameters are attempted. Not sure of the figure.
Note the common theme here between your situation and the one cited - a bazillion columns.
That same source offers a workaround as well.
I've managed to get round this problem however by setting the access mode to be "Table or view - fast load". Viewing the trace again confirms that SSIS attempts this via an "insert bulk" statement, which loads fine.
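If switching the access mode isn't available to you (the question's failing statement is plain T-SQL over a linked server, not an SSIS data flow), the fallback is the workaround the question already discovered: name the columns explicitly instead of using *. Listing hundreds of columns by hand is error-prone, so the list can be generated from the catalog. A minimal sketch, run in ServerA's data_collection database (the FOR XML PATH trick works on SQL Server 2005 and later):

-- Build a comma-separated, bracket-quoted column list for dbo.table1;
-- paste the result into both the INSERT column list and the SELECT list.
SELECT STUFF((SELECT ', ' + QUOTENAME(c.name)
              FROM sys.columns AS c
              WHERE c.object_id = OBJECT_ID(N'dbo.table1')
              ORDER BY c.column_id
              FOR XML PATH('')), 1, 2, '') AS column_list;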

Related

Must force MS SQL Server (2012) to return to its "virgin" state when re-running an edited T-SQL script

Very often, extremely trivial edits cause my T-SQL scripts to fail when rerun from within an MS SQL Server 2012 edit window (e.g. "SqlQuery1.txt"). Frustratingly, there's no pattern to which edits cause this problem. Coping with this has forced me to jump through some weird hoops.
An example: I changed exactly 1 character in a working query (from "Set @THisColumn = 1" to "Set @ThisColumn = 1"; the "H" was changed to "h" to match the variable declaration). When the script was rerun, MS SQL Server 2012 gave me this error:
Msg 213, Level 16, State 1, Line 54
Column name or number of supplied values does not match table definition.
My research shows this message gets thrown when there is a problem with a TABLE, which should be impossible in the above case. The error is unimportant; my problem is much more general - having to use the following unsatisfactory "workaround":
Copy the (edited) script into a new edit pane ("SQLQuery2.txt"). It works - until the next time it's edited. Which then forces me to use another edit pane ("SQLQuery3.txt"). Repeat ad nauseam.
Having to do this supports the theory that the problem is somehow related to how the edit pane works - NOT the script. (Hence the title of this question)
Using this "workaround" destroys my train of thought while the resulting large number of open "scratchpad" windows causes me to lose track of what I was doing ("... lets see now, is it version 13, 17, 26 or 28 that is the last "known good" version?...).
My suspicion is that SQL considers every subsequent rerun as being a part (a continuation) of the FIRST invocation of that script. So it tries to be "helpful" (not!) by "optimizing" the query.
In a development environment this "assistance" is very premature - and exactly what I do NOT want to have happen. (First make it work... then optimize it.) How do I suppress this undesirable behavior?
From my research I know that my scripts must have lines like this one before creating a temporary table:
IF OBJECT_ID('tempdb..#XYZ') IS NOT NULL DROP TABLE #XYZ
and for a temporary procedure:
IF OBJECT_ID('tempdb..#ABCD') IS NOT NULL DROP PROCEDURE #ABCD
GO -- Required before defining any procedure
CREATE PROCEDURE #ABCD
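Putting those pieces together, a minimal sketch of an idempotent preamble (the #XYZ/#ABCD names are the question's placeholders; the table and procedure bodies are illustrative):

IF OBJECT_ID('tempdb..#XYZ') IS NOT NULL DROP TABLE #XYZ
CREATE TABLE #XYZ (ThisColumn int)
GO
IF OBJECT_ID('tempdb..#ABCD') IS NOT NULL DROP PROCEDURE #ABCD
GO -- Required before defining any procedure
CREATE PROCEDURE #ABCD AS
BEGIN
    SELECT ThisColumn FROM #XYZ
END
GO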
The need to do this implies that my supposition may be correct (why else would you need to do it?).
What else has to be done so that every time I press the "execute" button I get a "clean restart" of SQL?
Other factors to keep in mind:
The script, invoked from Python 3.x, is periodically (and frequently) run as a batch/cron job (i.e. an automatically scheduled task). This means that any form of manual intervention (e.g. using tools like MS 2012 Management Studio etc.) is not an option.
Stored procedures aren't allowed; instead, the Python application reads a file of SQL commands that get passed to SQL Server for execution (in effect emulating a user who types in those commands at a SQL console).
Finally, the script must also work for users that have the minimum possible (e.g. "guest") privileges.

Pentaho Execute SQL Statements variable conversion to null

I am using PDI to delete and insert some data from a DB. I have the following issue: I create two variables called START_DATE and END_DATE that are used to select the data that will be deleted from my DB. I am able to get them and run my transformation with no errors in the log file, but when I checked whether data was deleted, I found it wasn't. I then checked my "DeleteProcedure" step, and it says "Conversion error: null". I have tried different approaches to take the variables and pass them as Strings, but I haven't been able to solve this issue. It cannot be a SQL mistake, as I tested it with a constant and it works.
Any ideas? I attach some pics. Thanks!
As the documentation of the Execute SQL script step says:
Note: When you have an issue, that the SQL is started at the initialization phase of the transformation and not for each row, make sure to check the option "Execute for each row" (see description below).
In your case it executes during the initialization phase of the transformation; that's why it gets null values instead of the ones from the previous step.
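For illustration, once "Execute for each row" is checked, the step's SQL can consume the incoming fields as ? placeholders. A hedged sketch with an illustrative table and column (START_DATE and END_DATE are listed, in that order, as the step's parameter fields):

DELETE FROM my_table
WHERE event_date BETWEEN ? AND ?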

string or binary data truncated after server reboot

After rebooting SQL Server 2005 Standard 9.0.3233, we have been experiencing the above error in some of our stored procedures which try to insert into a table variable from a specific column of a table. The base table has the column defined as varchar(10), but the table variable has the column being inserted into defined only as varchar(3). However, the SELECT statement only returns data with 3 or less characters.
We have not changed the data or the code base in any other way, and this is only happening on our production server. If I run the same query on a test server with the same SQL Server 2005 edition installed, but an older backup, the error does not occur. The same data is returned in both queries if the INSERT is removed, or the table variable column is extended to match the base table.
What I have noticed is that the execution plan is different when the same query is run on the two servers. On the server where the query works, there is a computed scalar operation which takes the column and does an implicit conversion to varchar(3), before it is then outputted to the nested loop join operation.
On the server that returns an error, there is a hash join and table scan of the base table instead. I have already tried to rebuild indices and update statistics on all tables involved, including using fullscan, and with the same stat_stream as in the server that works, but I can't get the same plan back.
For now we have fixed the few stored procedures which were broken by modifying the size of the table variable column, but I would like to know if there is a way to get the statistics and indices back so that they produce the same plans as before, in case there is more code out there which just hasn't executed yet.
This is known behavior and probably has nothing to do with your reboot. Effectively what's happening is that the optimizer is re-ordering the logical elements of your query for performance reasons, but this results in the truncation-error check being done before the WHERE clause's filtering.
The recommended solution is to wrap the column expression that gets assigned to your VARCHAR(3) in a CASE expression that duplicates the length test in your WHERE clause. I know that sounds illogical, but it usually fixes the problem.
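A minimal sketch of that pattern with illustrative table and column names (the original procedure isn't shown in the question):

DECLARE @results TABLE (code varchar(3));

INSERT INTO @results (code)
SELECT CASE WHEN LEN(t.code10) <= 3 THEN t.code10 END -- repeat the WHERE test so the conversion can't happen before the filter
FROM dbo.base_table AS t
WHERE LEN(t.code10) <= 3;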

How do I pass system variable value to the SQL statement in Execute SQL task?

SSIS 2008. Very simple task. I want to retrieve a system variable and use it in an SQL INSERT. I want to retrieve the value of System::MachineName and use it in an insert statement.
Using the statement INSERT INTO MYLOG (COL1) SELECT @[System::MachineName] gives the error Error: ..failed to parse. Must declare the scalar variable "@"
Using the statements SELECT @System::MachineName or SELECT @@[System::MachineName] gives the error Error: Incorrect syntax near '::'
I am not trying to pass a parameter to the query. I have searched for a day already but couldn't find how to do this one simple thing!
Here is one way you can do this. The following sample package was created using SSIS 2008 R2 and uses SQL Server 2008 R2 as backend.
Create a sample table in your SQL Server database named dbo.PackageData (a possible definition is sketched at the end of this answer).
Create an SSIS package.
In the SSIS package, add an OLE DB connection manager named SQLServer to connect to your database, say an SQL Server database.
On the Control flow tab, drag and drop an Execute SQL Task
Double-click on the Execute SQL task to bring the Execute SQL Task Editor.
On the General tab of the editor, set the Connection property to your connection manager named SQLServer.
In the property SQLStatement, enter the insert statement INSERT INTO dbo.PackageData (PackageName) VALUES (?)
On the Parameter Mapping tab, click the Add button and select the package variable that you would like to use. Change the data type accordingly; this example is going to insert the PackageName into a table, so the data type would be VARCHAR. Set the Parameter Name to 0, which indicates the index value of the parameter. Click the OK button.
Execute the package.
You will notice a new record inserted into the table. I retained the package name as Package; that's why the table shows Package as the inserted value.
Hope that helps.
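For reference, a possible definition of the sample table from the first step (the column size is an assumption):

CREATE TABLE dbo.PackageData (
    PackageName varchar(100) NOT NULL -- receives the package name via the ? parameter
);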
Per my comment against @ZERO's answer (repeated here as an answer so it isn't overlooked by SSIS newcomers).
The OP's question is pretty much the use case for SSIS property expressions.
To pass SSIS variables into the query string one would concatenate it into an expression set for the SqlStatementSource property:
"INSERT INTO MYLOG (COL1) SELECT " + #[System::MachineName]
This is not to suggest the accepted answer isn't a good pattern, as in general, the parameterised approach is safer (against SQL injection) and faster (on re-use) than direct query string manipulation. But for a system variable (as opposed to a user-entered string) this solution should be safe from SQL injection, and this will be roughly as fast or faster than a parameterised query if re-used (as the machine name isn't changing).
I never used it before, but maybe you can check out the use of expressions in the Execute SQL task for that.
Or just put the whole query into an expression of a variable with EvaluateAsExpression set to true. Then use OLE DB to do your insert.
Along with @user756519's answer, depending on your connection string, your variable names and SQLStatementSource changes.

SQL Server reports 'Invalid column name', but the column is present and the query works through management studio

I've hit a bit of an impasse. I have a query that is generated by some C# code. The query works fine in Microsoft SQL Server Management Studio when run against the same database.
However when my code tries to run the same query I get the same error about an invalid column and an exception is thrown. All queries that reference this column are failing.
The column in question was recently added to the database. It is a date column called Incident_Begin_Time_ts.
An example that fails is:
select * from PerfDiag
where Incident_Begin_Time_ts > '2010-01-01 00:00:00';
Other queries like Select MAX(Incident_Begin_Time_ts); also fail when run in code because it thinks the column is missing.
Any ideas?
Just press Ctrl + Shift + R and see...
In SQL Server Management Studio, Ctrl+Shift+R refreshes the local cache.
I suspect that you have two tables with the same name. One is owned by the schema 'dbo' (dbo.PerfDiag), and the other is owned by the default schema of the account used to connect to SQL Server (something like userid.PerfDiag).
When you have an unqualified reference to a schema object (such as a table) — one not qualified by schema name — the object reference must be resolved. Name resolution occurs by searching in the following sequence for an object of the appropriate type (table) with the specified name. The name resolves to the first match:
Under the default schema of the user.
Under the schema 'dbo'.
The unqualified reference is bound to the first match in the above sequence.
As a general recommended practice, one should always qualify references to schema objects, for performance reasons:
An unqualified reference may invalidate a cached execution plan for the stored procedure or query, since the schema to which the reference was bound may change depending on the credentials executing the stored procedure or query. This results in recompilation of the query/stored procedure, a performance hit. Recompilations cause compile locks to be taken out, blocking others from accessing the needed resource(s).
Name resolution slows down query execution as two probes must be made to resolve to the likely version of the object (that owned by 'dbo'). This is the usual case. The only time a single probe will resolve the name is if the current user owns an object of the specified name and type.
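Applied to the query from the question, the schema-qualified form looks like this:

-- Unqualified: resolved against the caller's default schema first.
select * from PerfDiag
where Incident_Begin_Time_ts > '2010-01-01 00:00:00';

-- Qualified: always binds to dbo.PerfDiag, no matter who runs it.
select * from dbo.PerfDiag
where Incident_Begin_Time_ts > '2010-01-01 00:00:00';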
[Edited to further note]
The other possibilities are (in no particular order):
You aren't connected to the database you think you are.
You aren't connected to the SQL Server instance you think you are.
Double check your connect strings and ensure that they explicitly specify the SQL Server instance name and the database name.
In my case I restarted Microsoft SQL Server Management Studio and this worked well for me.
If you are running this inside a transaction and a SQL statement before this drops/alters the table, you can also get this message.
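A contrived illustration of that case (reusing the question's table and column):

BEGIN TRANSACTION;
ALTER TABLE dbo.PerfDiag DROP COLUMN Incident_Begin_Time_ts;
-- This now fails with "Invalid column name 'Incident_Begin_Time_ts'":
SELECT Incident_Begin_Time_ts FROM dbo.PerfDiag;
ROLLBACK TRANSACTION;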
I eventually shut down and restarted Microsoft SQL Server Management Studio, and that fixed it for me. But at other times, just starting a new query window was enough.
If you are using variables with the same name as your column, it could be that you forgot the '@' variable marker. In an INSERT statement it will be detected as a column.
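For instance (illustrative names; dbo.MyLog has a single column Col1):

DECLARE @MachineName sysname = 'SERVER01';

-- Missing '@': MachineName is parsed as a column reference and fails with
-- "Invalid column name 'MachineName'".
INSERT INTO dbo.MyLog (Col1) SELECT MachineName FROM dbo.MyLog;

-- Correct: the variable is used.
INSERT INTO dbo.MyLog (Col1) SELECT @MachineName;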
Just had the exact same problem. I renamed some aliased columns in a temporary table which is further used by another part of the same code. For some reason, this was not captured by SQL Server Management Studio and it complained about invalid column names.
What I simply did is create a new query, copy paste the SQL code from the old query to this new query and run it again. This seemed to refresh the environment correctly.
In my case I was trying to get the value from the wrong ResultSet when querying multiple SQL statements.
In my case it seems the problem was a weird caching issue; the solutions above didn't work.
If your code was working fine and you added a column to one of your tables and it gives the 'invalid column name' error, and the solutions above don't work, try this: first run only the section of code that creates the modified table, and then run the whole code.
Including this answer because this was the top result for "invalid column name sql" on google and I didn't see this answer here. In my case, I was getting Invalid Column Name, Id1 because I had used the wrong id in my .HasForeignKey statement in my Entity Framework C# code. Once I changed it to match the .HasOne() object's id, the error was gone.
I've gotten this error when running a scalar function using a table value, but the Select statement in my scalar function RETURN clause was missing the "FROM table" portion. :facepalms:
Also happens when you forget to change the ConnectionString and query a table that has no idea about the changes you're making locally.
I had this problem with a View, but the exact same SQL code worked perfectly as a query. In fact SSMS actually threw up a couple of other problems with the View, that it did not have with the query. I tried refreshing, closing the connection to the server and going back in, and renaming columns - nothing worked. Instead I created the query as a stored procedure, and connected Excel to that rather than the View, and this solved the problem.