After rebooting SQL Server 2005 Standard 9.0.3233, we have been experiencing a "String or binary data would be truncated" error in some of our stored procedures which try to insert into a table variable from a specific column of a table. The base table has the column defined as varchar(10), but the table variable has the column being inserted into defined as only varchar(3). However, the SELECT statement only returns data with 3 or fewer characters.
We have not changed the data or the code base in any other way, and this is only happening on our production server. If I run the same query on a test server with the same SQL Server 2005 edition installed, but an older backup, the error does not occur. The same data is returned in both queries if the INSERT is removed, or the table variable column is extended to match the base table.
What I have noticed is that the execution plan is different when the same query is run on the two servers. On the server where the query works, there is a Compute Scalar operation that takes the column and does an implicit conversion to varchar(3) before it is output to the nested loop join operation.
On the server that returns the error, there is a hash join and a table scan of the base table instead. I have already tried rebuilding the indexes and updating statistics on all tables involved, including using FULLSCAN and the same STATS_STREAM as on the server that works, but I can't get the same plan back.
For now we have fixed the few stored procedures that were broken by widening the table variable column, but I would like to know if there is a way to get the statistics and indexes back to producing the same plans as before, in case there is more code out there which just hasn't executed yet.
This is known behavior and probably has nothing to do with your reboot. Effectively, the optimizer is re-ordering the logical elements of your query for performance reasons, but the result is that the truncation-error check is performed before the WHERE clause's filtering.
The recommended workaround is to wrap the column expression that gets assigned to your varchar(3) in a CASE expression that duplicates the length test in your WHERE clause. I know that sounds illogical, but it usually fixes the problem.
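For example, a minimal sketch (the table and column names here are invented for illustration):

DECLARE @results TABLE (code VARCHAR(3));

-- The CASE duplicates the WHERE filter, so even if the optimizer evaluates
-- the assignment before the filter, nothing longer than 3 characters is
-- ever converted to VARCHAR(3).
INSERT INTO @results (code)
SELECT CASE WHEN LEN(t.code) <= 3 THEN t.code END
FROM dbo.BaseTable t
WHERE LEN(t.code) <= 3;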
Related
I have a pair of linked SQL servers: ServerA and ServerB. I want to write a simple INSERT INTO SELECT statement which will copy a row from ServerA's database to ServerB's database. ServerB's database was copied directly from ServerA's, and so they should have the exact same basic structure (same column names, etc.)
The problem is that when I try to execute the following statement:
INSERT INTO [ServerB].[data_collection].[dbo].[table1]
SELECT * FROM [ServerA].[data_collection].[dbo].[table1]
I get the following error:
Msg 16902, Level 16, State 48, Line 1
sp_cursor: The value of the parameter 'value' is invalid.
On the other hand, if I try to execute the following statement:
INSERT INTO [ServerB].[data_collection].[dbo].[table1] (Time)
SELECT Time FROM [ServerA].[data_collection].[dbo].[table1]
The statement works just fine and executes as expected, regardless of which or how many columns I specify to insert.
So my question here is why would my INSERT INTO SELECT statement function properly when I explicitly specify which columns to copy, but not when I tell it to copy everything using "*"? My second question would then be: how do I fix the problem?
Googling around to follow up on my initial hunch, I found a source I consider reliable enough to cite in an answer.
The 'value' parameter specified isn't one of your columns, it is the optional argument to sp_cursor that is called implicitly via your INSERT INTO...SELECT.
From SQL Server Central...
I have an SSIS package that needs to populate a SQL table with data from a pipe-delimited text file containing 992 (!) columns per record. ...Initially I'd set up the package to contain a data flow task with an OLE DB destination control where the access mode was set to Table or view mode. For some reason though, when running the package it would crash, with an error stating the parameter 'value' was not valid in the sp_cursor procedure. On setting up a trace in Profiler to see what this control actually does, it appears it tries to insert the records using the sp_cursor procedure. Running the same query in SQL Server Management Studio gives the same result. After much testing and pulling of hair out, I've found that by replacing the sp_cursor statement with an insert statement the records populated fine, which suggests that sp_cursor cannot cope when more than a certain number of parameters are attempted. Not sure of the figure.
Note the common theme here between your situation and the one cited - a bazillion columns.
That same source offers a workaround as well.
I've managed to get round this problem however by setting the access mode to "Table or view - fast load". Viewing the trace again confirms that SSIS attempts this via an "insert bulk" statement, which loads fine.
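And if hand-typing a long column list is the obstacle, the explicit list can be generated from the catalog. A sketch, assuming SQL Server 2005 or later, no identity columns, and run from ServerA's data_collection database so OBJECT_ID resolves locally:

DECLARE @cols NVARCHAR(MAX);

-- Build a comma-separated, bracket-quoted list of the source table's columns
SELECT @cols = STUFF((
    SELECT ',' + QUOTENAME(c.name)
    FROM sys.columns c
    WHERE c.object_id = OBJECT_ID('dbo.table1')
    ORDER BY c.column_id
    FOR XML PATH('')
), 1, 1, '');

-- Run the INSERT with the explicit column list on both sides
DECLARE @sql NVARCHAR(MAX);
SET @sql = N'INSERT INTO [ServerB].[data_collection].[dbo].[table1] (' + @cols + N') '
         + N'SELECT ' + @cols + N' FROM [ServerA].[data_collection].[dbo].[table1]';
EXEC (@sql);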
I have a stored proc as the SQL command text, which is getting passed a parameter that contains a table name. The proc then returns data from that table. I cannot call the table directly as the OLE DB source because some business logic needs to happen to the result set in the proc. In SQL 2008 this worked fine. In an upgraded 2012 package I get "The metadata could not be determined because ... contains dynamic SQL. Consider using the WITH RESULT SETS clause to explicitly describe the result set."
The problem is I cannot define the field names in the proc because the table name that gets passed as a parameter can be a different value, and the resulting fields can be different every time. Has anybody encountered this problem, or does anyone have any ideas? I've tried all sorts of things with dynamic SQL, using dm_exec_describe_first_result_set, temp tables, and CTEs combined with WITH RESULT SETS, but it doesn't work in SSIS 2012; I get the same error. Execution context is a problem with a lot of the dynamic SQL approaches.
This is latest thing I tried, with no luck:
DECLARE @sql VARCHAR(MAX)
SET @sql = 'SELECT * FROM ' + @dataTableName

DECLARE @listStr VARCHAR(MAX)
SELECT @listStr = COALESCE(@listStr + ',', '') + [name] + ' ' + system_type_name
FROM sys.dm_exec_describe_first_result_set(@sql, NULL, 1)

EXEC('EXEC(''SELECT * FROM myDataTable'') WITH RESULT SETS ((' + @listStr + '))')
So I ask out of kindness: why on God's green earth are you using an SSIS Data Flow Task to handle dynamic source data like this?
The reason you're running into trouble is that you're perverting every purpose of an SSIS Data Flow Task:
to extract a known source with known metadata that can be statically typed and cached in design-time
to run through a known process with straightforward (and ideally asynchronous) transformations
to take that transformed data and load it into a known destination also with known metadata
It's fine to have parameterized data sources that bring back different data. But to have them bring back entirely different metadata each time with no congruity between the different sets is, frankly, ridiculous, and I'm not entirely sure I want to know how you handled all your column metadata in the working 2008 package.
This is why it wants you to add a WITH RESULT SETS clause to the SSIS query: so it can generate some metadata. It doesn't do this at runtime; it can't. It has to have a known set of columns (because it aliases them all into compiled variables anyway) to work with. It expects the same columns every time it runs that Data Flow Task: the exact same columns, down to the names, the types, and the constraints.
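To see why, look at what a working call has to spell out at design time (the proc name and columns below are hypothetical):

EXEC dbo.GetTableData @dataTableName = 'myDataTable'
WITH RESULT SETS ((
    Id   INT,
    Name VARCHAR(50)
));

The result-set shape is a literal part of the statement; there is no way to leave it open-ended.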
Which leads to one (terrible, terrible) solution - just stick all the data into a temporary table with Column1, Column2 ... ColumnN and then use the same variable you're using as the table name parameter to conditionally branch your code and do whatever you want with the columns.
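A sketch of that workaround, with all names hypothetical; every source column is cast to a common type so the shape never changes:

-- Fixed-shape staging table: SSIS sees the same metadata on every run
CREATE TABLE dbo.GenericStage (
    Column1 NVARCHAR(MAX) NULL,
    Column2 NVARCHAR(MAX) NULL,
    Column3 NVARCHAR(MAX) NULL
);

-- Inside the proc: whatever the source table is, cast its columns into the fixed slots
INSERT INTO dbo.GenericStage (Column1, Column2, Column3)
SELECT CAST(ColA AS NVARCHAR(MAX)),
       CAST(ColB AS NVARCHAR(MAX)),
       CAST(ColC AS NVARCHAR(MAX))
FROM dbo.SourceTableA;

SELECT Column1, Column2, Column3 FROM dbo.GenericStage;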
Another more sane solution would be to create a data flow task for each of your source tables, and use your parameter in a precedence constraint to just pick which data flow task should run.
For a problem this poorly suited to an out-of-the-box ETL tool, you should also seriously consider rolling your own in C# or a Script Task instead of the Data Flow Task provided by SSIS.
In short, please don't do this. Think of the children (packages)!
I've used CozyRoc Dynamic DataFlow Plus to achieve this.
Using configuration tables to build the SQL Select statements, I have a single SSIS package that loads data from Oracle and Sybase (or any OLEDB source) to MS SQL. Some of the result sets are in the millions of rows and performance is excellent.
Instead of writing a new package every time a new table is needed, this can be configured in minutes and run on the pre-tested and robust existing package.
Without it I would have been up for writing hundreds of packages.
My project is in Visual FoxPro and I use MS SQL Server 2008. When I fire SQL queries in a batch, some of the queries don't execute. However, no error is thrown. I haven't used BEGIN TRAN and ROLLBACK yet. What should be done?
That all depends... You haven't posted any sample of your queries to give us an indication of the possible failure. However, one thing I've had good results with from VFP to SQL is to build the whole batch into a string (I prefer using TEXT/ENDTEXT for readability), then send that entire value to SQL. If there are any "parameter" based values that come from VFP locally, you can use "?" to indicate the value will come from a VFP variable. Then you can batch everything in a single call vs multiple individual queries...
vfpField = 28
vfpString = 'Smith'

* Build the whole T-SQL batch as one string; "?" marks values bound from VFP variables
text to lcSqlCmd noshow
   -- stage the filtered rows into a temp table on the SQL side
   select
         YT.blah,
         YT.blah2
      into
         #tempSqlResult
      from
         yourTable YT
      where
         YT.SomeKey = ?vfpField
         and YT.LastName = ?vfpString;

   -- use the staged rows, then clean up
   select
         ost.Xblah,
         t.blah,
         t.blah2
      from
         OtherSQLTable ost
            join #tempSqlResult t
               on ost.Xblah = t.blah;

   drop table #tempSqlResult;
endtext

nHandle = sqlconnect( "your connection string" )
nAns = sqlexec( nHandle, lcSqlCmd, "LocalVFPCursorName" )
No, I don't have error trapping in here; this is just to show the principle and readability. I know the sample query could easily have been done via a single join, but if you are working with pre-aggregations and want to put them into temporary work areas (much like localized VFP cursors) to be used in your next step, this works via #tempSqlResult, since "#" indicates a temporary table on SQL Server for whatever the current connection handle is.
If you want to return MULTIPLE RESULT SETs from a single SQL call, you can do that too: just add more queries that don't have an "into #tmpSQLblah" clause. Then all of those result cursors will be brought back down to VFP based on the "LocalVFPCursorName" prefix. If you are returning 3 result sets, then VFP will have 3 cursors open, called
LocalVFPCursorName
LocalVFPCursorName1
LocalVFPCursorName2
and they are numbered based on the sequence of the queries in the SQLEXEC() call. But if you can provide more detail on what you ARE trying to do, with samples, we can offer more specific help too.
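For instance, a batch like this (same placeholder tables as above) comes back as three cursors named LocalVFPCursorName, LocalVFPCursorName1 and LocalVFPCursorName2:

-- each plain SELECT (no "into #temp") returns one result set to VFP
select YT.blah, YT.blah2 from yourTable YT where YT.SomeKey = ?vfpField;
select ost.Xblah from OtherSQLTable ost;
select count(*) as RowTally from OtherSQLTable;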
I've hit a bit of an impasse. I have a query that is generated by some C# code. The query works fine in Microsoft SQL Server Management Studio when run against the same database.
However, when my code tries to run the same query, I get an "invalid column name" error and an exception is thrown. All queries that reference this column are failing.
The column in question was recently added to the database. It is a date column called Incident_Begin_Time_ts.
An example that fails is:
select * from PerfDiag
where Incident_Begin_Time_ts > '2010-01-01 00:00:00';
Other queries, like select MAX(Incident_Begin_Time_ts) from PerfDiag;, also fail when run in code because it thinks the column is missing.
Any ideas?
Just press Ctrl + Shift + R and see...
In SQL Server Management Studio, Ctrl+Shift+R refreshes the local IntelliSense cache.
I suspect that you have two tables with the same name. One is owned by the schema 'dbo' (dbo.PerfDiag), and the other is owned by the default schema of the account used to connect to SQL Server (something like userid.PerfDiag).
When you have an unqualified reference to a schema object (such as a table) — one not qualified by schema name — the object reference must be resolved. Name resolution occurs by searching in the following sequence for an object of the appropriate type (table) with the specified name. The name resolves to the first match:
Under the default schema of the user.
Under the schema 'dbo'.
The unqualified reference is bound to the first match in the above sequence.
As a general recommended practice, one should always qualify references to schema objects, for performance reasons:
An unqualified reference may invalidate a cached execution plan for the stored procedure or query, since the schema to which the reference was bound may change depending on the credentials executing the stored procedure or query. This results in recompilation of the query/stored procedure, a performance hit. Recompilations cause compile locks to be taken out, blocking others from accessing the needed resource(s).
Name resolution slows down query execution as two probes must be made to resolve to the likely version of the object (that owned by 'dbo'). This is the usual case. The only time a single probe will resolve the name is if the current user owns an object of the specified name and type.
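In other words, using the question's table name:

-- Unqualified: resolved against the connecting user's default schema first
SELECT * FROM PerfDiag;

-- Schema-qualified: always binds to the dbo-owned table, with a single probe
SELECT * FROM dbo.PerfDiag;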
[Edited to further note]
The other possibilities are (in no particular order):
You aren't connected to the database you think you are.
You aren't connected to the SQL Server instance you think you are.
Double-check your connection strings and ensure that they explicitly specify the SQL Server instance name and the database name.
In my case I restarted Microsoft SQL Server Management Studio and that fixed it for me.
If you are running this inside a transaction, and a SQL statement before this point drops or alters the table, you can also get this message.
I eventually shut down and restarted Microsoft SQL Server Management Studio, and that fixed it for me. But at other times, just starting a new query window was enough.
If you are using variables with the same name as your column, it could be that you forgot the '@' variable marker. In an INSERT statement it will be detected as a column.
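A minimal illustration, reusing the question's table and column purely as an example:

DECLARE @Incident_Begin_Time_ts DATETIME;
SET @Incident_Begin_Time_ts = GETDATE();

-- With the @, this inserts the variable's value; without it, SQL Server
-- tries to resolve Incident_Begin_Time_ts as a column and fails
INSERT INTO dbo.PerfDiag (Incident_Begin_Time_ts)
VALUES (@Incident_Begin_Time_ts);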
Just had the exact same problem. I renamed some aliased columns in a temporary table that is then used by another part of the same code. For some reason, the change was not picked up by SQL Server Management Studio, and it complained about invalid column names.
What I simply did was create a new query, copy-paste the SQL code from the old query into the new one, and run it again. This seemed to refresh the environment correctly.
In my case, I was trying to get the value from the wrong ResultSet when querying multiple SQL statements.
In my case it seems the problem was a weird caching issue, and the solutions above didn't work.
If your code was working fine, you added a column to one of your tables, you now get the 'invalid column name' error, and the solutions above don't work, try this: first run only the section of code that creates the modified table, and then run the whole code.
Including this answer because this was the top result for "invalid column name sql" on Google and I didn't see this answer here. In my case, I was getting Invalid column name 'Id1' because I had used the wrong id in my .HasForeignKey statement in my Entity Framework C# code. Once I changed it to match the .HasOne() object's id, the error was gone.
I've gotten this error when running a scalar function using a table value, but the Select statement in my scalar function RETURN clause was missing the "FROM table" portion. :facepalms:
This also happens when you forget to change the ConnectionString and query a table that has no idea about the changes you're making locally.
I had this problem with a view, but the exact same SQL code worked perfectly as a query. In fact, SSMS threw up a couple of other problems with the view that it did not have with the query. I tried refreshing, closing the connection to the server and going back in, and renaming columns; nothing worked. Instead I created the query as a stored procedure and connected Excel to that rather than the view, and this solved the problem.
Using SQL Server 2000 and Microsoft SQL Server Management Studio, is there a way to create a delimited string based upon an unknown number of columns per row?
I'm pulling one row at a time from different tables and am going to store them in a column in another table.
A simple SQL query can't do anything like that. You need to specify the fields you are concatenating.
The only method that I'm aware of is to dynamically build a query for each table.
I don't recall the exact structure of MSSQL 2000's system tables, so take the sketch below with a grain of salt, but there are system tables that contain table definitions. By parsing the contents of those system tables you can dynamically build the necessary query for each source data table.
T-SQL that writes T-SQL, however, can be a bit tricky to debug and maintain :) So be careful how you structure everything...
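Something along these lines, against SQL 2000's syscolumns (the table name, pipe delimiter, and VARCHAR(100) cast width are all assumptions):

DECLARE @list VARCHAR(8000), @sql VARCHAR(8000)
SET @list = ''

-- Concatenate one ISNULL(CAST(...)) expression per column of the source table
SELECT @list = @list
             + CASE WHEN @list = '' THEN '' ELSE ' + ''|'' + ' END
             + 'ISNULL(CAST([' + c.name + '] AS VARCHAR(100)), '''')'
FROM syscolumns c
WHERE c.id = OBJECT_ID('dbo.SourceTable')
ORDER BY c.colid

-- One delimited string per row
SET @sql = 'SELECT ' + @list + ' AS DelimitedRow FROM dbo.SourceTable'
EXEC (@sql)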
Dems.
EDIT:
Or just do it in your client application.