I have enabled CDC in a source database and created the following packages:
Initial Load (CDC Start --> Data Flow --> CDC End)
Incremental Load (CDC Start (Get Processing Range) --> Data Flow --> CDC End (Mark Processed))
These packages run perfectly fine when I run them manually, but I get the following error message when they run through a scheduled job:
Data Flow Task: Error: "Problems when trying to get changed records from dbo_AddonQuote. Reason - Invalid column name '__$command_id'."
Here is the CDC state value:
ILUPDATE/CS/0x0000053600005CFD0002/CE/0x000005360000604F0004/IR/0x0000053600005CFD0002/0x0000053600005D140002/TS/2018-03-22T23:10:22.5173580/
As I said before, this does not happen when I run the package manually.
Can anyone shed some light on what's happening here, or on how to debug this issue?
I was the one who asked MS to add that column, because I discovered a few CDC bugs. They added the column, but they did it in an incorrect/inconsistent way.
Recently they released new CUs that fix a few CDC bugs, and one of them is (likely) for your issue. Download the latest CU for your version and/or try to execute
sp_cdc_vupgrade
against the database enabled for CDC.
Before that, check the following (a quick way to check is sketched after this list):
whether your capture instance (cdc.dbo_AddonQuote_CT) has that column (__$command_id)
whether the CDC stored procedures ([cdc].[sp_batchinsert_xxxxx]) reference that column
whether the CDC functions ([cdc].[fn_cdc_get_net_changes_dbo_xxxxx]) reference that column
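A quick way to run those checks is a query like the following (a rough sketch; the capture instance name dbo_AddonQuote is taken from your error message, so adjust it for your own instance):
-- 1. Does the change table have the __$command_id column?
SELECT name
FROM sys.columns
WHERE object_id = OBJECT_ID('cdc.dbo_AddonQuote_CT')
  AND name = '__$command_id';

-- 2./3. Which CDC procedures/functions mention the column in their definition?
SELECT OBJECT_SCHEMA_NAME(object_id) AS schema_name,
       OBJECT_NAME(object_id)        AS object_name
FROM sys.sql_modules
WHERE OBJECT_SCHEMA_NAME(object_id) = 'cdc'
  AND definition LIKE '%\_\_$command\_id%' ESCAPE '\';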
BTW, we don't use the SSIS CDC Data Flow; it's better to create your own solution. The MS CDC get net changes functions are very slow in certain scenarios, and in certain scenarios they return incorrect results. If you create your own methods to read the data from the capture instances, it will be more reliable and faster.
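If you go that route, a minimal sketch of reading changes straight from a capture instance could look like this (assuming the capture instance is dbo_AddonQuote; adapt the names and add your own error handling):
DECLARE @from_lsn BINARY(10) = sys.fn_cdc_get_min_lsn('dbo_AddonQuote');
DECLARE @to_lsn   BINARY(10) = sys.fn_cdc_get_max_lsn();

-- Read the raw change rows in commit order instead of calling the
-- fn_cdc_get_net_changes_* function.
SELECT *
FROM cdc.dbo_AddonQuote_CT
WHERE __$start_lsn > @from_lsn
  AND __$start_lsn <= @to_lsn
ORDER BY __$start_lsn, __$seqval;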
The following change fixed my issue.
My old package:
Get Processing Range --> Data Flow --> Mark Processed
Updated package:
CDC Start --> Get Processing Range --> Data Flow --> Mark Processed
Here is a reference link that explains the issue (the CDC state value):
http://www.bradleyschacht.com/understanding-the-cdc-state-value/
None of the above solutions worked for me.
I'm using MS SQL 2017 with the latest CU15.
In the end I didn't use CDC Start in SSIS, but I found an official solution in Microsoft DWH certification materials that also works, as long as you're not using CDC Start.
The following example shows how to change:
cdc.fn_cdc_get_all_changes and cdc.fn_cdc_get_net_changes in order to avoid such problems in SSIS.
You need to return the __$command_id column as NULL for every row.
Here it is for the [HumanResources] demo database with only one table, called Employee. No other changes are needed. Just run this for your db/table and your CDC in SSIS will work without any further modifications.
USE [HumanResources]
GO

EXEC sp_rename N'cdc.fn_cdc_get_all_changes_dbo_Employee', N'fn_cdc_get_all_changes_dbo_Employee_safe'
GO

CREATE FUNCTION cdc.fn_cdc_get_all_changes_dbo_Employee (
    @from_lsn BINARY(10),
    @to_lsn BINARY(10),
    @row_filter_option NVARCHAR(30))
RETURNS TABLE
AS
RETURN SELECT *, NULL AS __$command_id
       FROM cdc.fn_cdc_get_all_changes_dbo_Employee_safe(
           @from_lsn,
           @to_lsn,
           @row_filter_option)
GO

EXEC sp_rename N'cdc.fn_cdc_get_net_changes_dbo_Employee', N'fn_cdc_get_net_changes_dbo_Employee_safe'
GO

CREATE FUNCTION cdc.fn_cdc_get_net_changes_dbo_Employee (
    @from_lsn BINARY(10),
    @to_lsn BINARY(10),
    @row_filter_option NVARCHAR(30))
RETURNS TABLE
AS
RETURN SELECT *, NULL AS __$command_id
       FROM cdc.fn_cdc_get_net_changes_dbo_Employee_safe(
           @from_lsn,
           @to_lsn,
           @row_filter_option)
GO
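Once the wrappers are in place, they are called exactly like the original functions; for example (a usage sketch with the standard 'all' row filter option):
DECLARE @from_lsn BINARY(10) = sys.fn_cdc_get_min_lsn('dbo_Employee');
DECLARE @to_lsn   BINARY(10) = sys.fn_cdc_get_max_lsn();

SELECT *
FROM cdc.fn_cdc_get_all_changes_dbo_Employee(@from_lsn, @to_lsn, N'all');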
I have very basic skills in developing SSIS packages, and I am getting errors while developing this new package. With this package, the SQL instance is determined fine, as can be seen in the column mapping in the second picture. But it is not reading the columns of a user table (the IndexType column, in this case). This is the issue.
I have tried the steps below with no luck so far:
I set the ValidateExternalMetadata setting to False; still the same error.
I removed all columns one by one to identify whether the issue is with some specific data type; still the same issue.
I created a brand new test package; same error in the test package as well.
Another package works fine in production with the same settings against a user database. I copied the Data Flow Task component from it and used it in the new package; still the same issue.
Please help. Many thanks.
It may be the SQL Server version. I had a similar issue when using table variables or temp tables. You need to use WITH RESULT SETS, similar to this:
EXEC('SELECT 43112609 AS val;')
WITH RESULT SETS
(
(
val VARCHAR(10)
)
);
Article here:
http://www.itprotoday.com/sql-server-2012-t-sql-glance-execute-result-sets
SQL Server cannot tell what is being returned when using temp tables or table variables, so you have to specify it explicitly. This is needed for some versions of SQL Server.
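To make that concrete, here is a hedged sketch of a procedure that builds its result in a temp table and how its output shape can be declared for the SSIS source (the procedure and column names are made up for illustration):
CREATE PROCEDURE dbo.GetIndexTypes
AS
BEGIN
    -- SQL Server cannot infer the result shape from the temp table at parse time
    CREATE TABLE #result (IndexType NVARCHAR(60));
    INSERT INTO #result (IndexType)
    SELECT type_desc FROM sys.indexes;
    SELECT IndexType FROM #result;
END
GO

-- In the SSIS source, declare the shape explicitly:
EXEC dbo.GetIndexTypes
WITH RESULT SETS
(
    (
        IndexType NVARCHAR(60)
    )
);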
I recently upgraded to version 2016.3 from 2016.2. To be specific, I am currently using:
IntelliJ IDEA 2016.3
Build #IU-163.7743.44, built on November 17, 2016
I am using the DB2 (LUW) driver provided by the IDE in the example below, but I have tried my own drivers and still get the same results.
After I upgraded, if I try to copy a timestamp from the Results pane of the Database Console tool window, I do not get the full precision. I was able to copy the full timestamp in the previous version.
For example, my results pane shows something like this:
And this is what it looks like when I paste it here after copying it from the results pane: 2017-04-12 10:42:11
The only workaround I have found is to cast the timestamp to a CHAR and then copy it from the results pane. This works, but it is a pain, especially since most of my queries end up being SELECT *.
Pasting the cast value gives: 2017-04-12-10.42.11.193944
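For reference, the cast workaround is just something like this (the table and column names are made up):
-- Casting to CHAR preserves the full timestamp text when copying from the grid
SELECT CAST(CREATED_TS AS CHAR(26)) AS CREATED_TS_TEXT
FROM MYSCHEMA.MYTABLE;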
Anybody have any ideas on how to fix this? Workarounds?
It is a bug. It will be fixed in the IntelliJ IDEA 2017.1.2 update. Sorry for the inconvenience.
We're using two schemas in our project (dbo + kal).
When we try to create a view with the following SQL statement, Visual Studio shows an error in the error list.
CREATE VIEW [dbo].[RechenketteFuerAbkommenOderLieferantenView]
AS
SELECT
r.Id as RechenkettenId,
r.AbkommenId,
r.LieferantId,
rTerm.GueltigVon,
rTerm.GueltigBis,
rs.Bezeichnung,
rs.As400Name
FROM
[kal].[Rechenkette] r
JOIN
[kal].[RechenketteTerm] rTerm ON rTerm.RechenketteId = r.Id
JOIN
[kal].[Basisrechenkette] br ON rTerm.BasisrechenketteId = br.Id
JOIN
[kal].[Rechenkettenschema] rs ON rs.Id = br.Id
WHERE
r.RechenkettenTyp = 0
The error message looks like this:
SQL71501: Computed Column: [dbo].[RechenketteFuerAbkommenOderLieferantenView].[AbkommenId] contains an unresolved reference to an object. Either the object does not exist or the reference is ambiguous because it could refer to any of the following objects:
[kal].[Basisrechenkette].[r]::[AbkommenId], [kal].[Rechenkette].[AbkommenId], [kal].[Rechenkette].[r]::[AbkommenId], [kal].[Rechenkettenschema].[r]::[AbkommenId] or [kal].[RechenketteTerm].[r]::[AbkommenId].
Publishing the view works just fine, but it's quite annoying to see the error message all the time when building our project, and the serious errors get lost in the shuffle of these SQL errors.
Do you have any idea what the problem might be?
I just found the solution. Although I can't read your SQL (which appears to be German) well enough to know whether you're referring to system views: if so, a database reference to master must be provided. Otherwise, adding any other required database references should solve the problem.
This is described here for system views: Resolve reference to object information schema tables
and here for other database references.
Additional information is provided here: Resolving ambiguous references in SSDT project for SQL Server
For me, I was seeing SQL71501 on a user-defined table type. It turned out that the table type's SQL file in my solution wasn't set to build. As soon as I changed the build action from None to Build, the error disappeared.
I know this is an old question but it was the first one that popped up when searching for the error.
In my case the errors were preventing me from executing the SqlSchemaCompare in Visual Studio 2017. The error, however, was for a table/index of a table that was no longer part of the solution. A simple clean/rebuild did not help.
A reload of the Visual Studio solution did the trick.
We have a project that contains a view that references a table-valued function in another database. After adding the database reference required to resolve the fields used from the remote database, we were still getting this error. I found that the table-valued function was defined using "SELECT * FROM ...", which was old code created by someone not familiar with good coding practices. I replaced the "*" with the enumerated fields needed and compiled that function, then re-created the dacpac for that database to capture the resulting schema, and incorporated the new dacpac as the database reference. Woo hoo! The ambiguous references went away!
It seems that the SSDT engine cannot (or does not) always reach down into the bowels of the referenced dacpac to come back with all the fields. For sure, the projects I work on are normally quite large, so I think it makes sense to give the tools all the help you can when asking them to validate your code.
Although this is an old topic, it is highly ranked on search engines, so I will share the solution that worked for me.
I faced the same error code with a CREATE TYPE statement that was in a script file in my Visual Studio 2017 SQL Server project, because I couldn't find how to add a user-defined type specifically from the interface.
The solution is that, in Visual Studio, there are many programmability file types other than the ones you can see through right-click > Add. Just select New Element and use the search field to find the element you are trying to create.
From the last paragraph of the blog post Resolving ambiguous references in SSDT project for SQL Server, which is linked in the answer https://stackoverflow.com/a/33225020/15405769 :
In my case, when I double clicked the file and opened it, I found that one of the references to ColumnX was not using the two-part name, and thus SSDT was unable to determine which table it belonged to and, furthermore, whether the column existed in the table. Once I added the two-part name, bingo! I was down to no errors!
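To illustrate the two-part name point (a made-up sketch, not from the original project): SSDT can flag the first view below while the second one builds cleanly.
-- Ambiguous: ColumnX is not qualified, so SSDT may not be able to tell which table it comes from
CREATE VIEW dbo.AmbiguousView
AS
SELECT ColumnX
FROM dbo.TableA a
JOIN dbo.TableB b ON b.Id = a.Id;
GO

-- Resolved: the two-part (alias.column) name removes the ambiguity
CREATE VIEW dbo.ResolvedView
AS
SELECT a.ColumnX
FROM dbo.TableA a
JOIN dbo.TableB b ON b.Id = a.Id;
GO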
In my case, I got this error when I was trying to export the data-tier application. The error was related to the link on a database user. To solve the problem, you need to log in to the server with read rights on system users.
In my case, I just double-clicked the error, which took me to the exact spot in the procedure, and I noticed that a table column had been deleted or renamed but the stored procedure was still using the old column name.
If you build an SSDT project you can get an error which says:
“SQL71502: Function: [XXX].[XXX] has an unresolved reference to object [XXX].[XXX].”
If the code that is failing is trying to use something in the “sys” schema or the “INFORMATION_SCHEMA” schema then you need to add a database reference to the master dacpac:
Add a database reference to master:
Under the project, right-click References.
Select Add database reference….
Select System database.
Ensure master is selected.
Press OK.
Note that it might take a while for VS to update.
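As an illustration (a made-up example, not from the question): a function like this references sys.tables, so the project will not build without the master reference described above.
-- Without a database reference to master, SSDT reports SQL71502 for sys.tables
CREATE FUNCTION dbo.CountUserTables()
RETURNS INT
AS
BEGIN
    RETURN (SELECT COUNT(*) FROM sys.tables);
END
GO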
(Note: this was copied verbatim from the Stack Overflow question, with my screenshots added: https://stackoverflow.com/questions/18096029/unresolved-reference-to-obj… - I will explain more if you get past the TLDR, but it is quite exciting!)
NOT TLDR:
I like this question on Stack Overflow as it covers a common issue that anyone who has imported a database project into SSDT has faced. It might not affect everyone, but a high percentage of databases will have some piece of code that references something that doesn't exist.
The question has a few little gems in it that I would like to explore in a little more detail, because I don't feel that a comment on Stack Overflow really does them justice.
If we look at the question it starts like this:
If you're doing this from within Visual Studio, make sure that the file is set to "Build" within the properties.
I've had this numerous times and it really gets me every time. SQL build is case-sensitive even though your collation isn't. Check that the case agrees with the object and schema names that are referenced!
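A made-up example of the case trap: if the table is defined as dbo.Customer, a view written like the one below can fail the SSDT build with an unresolved reference even though the database collation would accept it at runtime.
CREATE VIEW dbo.ActiveCustomers
AS
SELECT c.Id, c.Name
FROM dbo.CUSTOMER c      -- case mismatch with the definition dbo.Customer
WHERE c.IsActive = 1;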
I have used MySQL for some projects and recently moved to PostgreSQL. In MySQL, when I alter a table or a field, the corresponding query is displayed on the page. I have not found such a feature in PostgreSQL (kindly excuse me if I'm wrong). Since the query was readily available, it was very helpful for me to test something in the local database (without explicitly typing the query), then copy the printed query and run it on the server. Now it seems I have to do all of that manually. Even though I'm familiar with the query operations, at times it can be a pretty time-consuming process. Can anybody help me? How can I get the corresponding query displayed in PostgreSQL (like in MySQL) whenever a change is made to a table?
If you use SELECT * FROM ..., there should not be any reason for your output not to include newly added columns, no matter how you get your results - whether that is psql on the command line, PgAdmin3, or any other IDE.
After you add new columns, it is possible that these changes are still in an open transaction in another window or SQL session - be sure to COMMIT such a transaction. Note that your changes to data or schema will not be visible to any other database clients until the transaction commits.
If your IDE still does not show the changes, maybe you need to refresh the list of tables, or, if that option is not available, restart your IDE. If that still does not work, maybe you should use a better IDE.
If you have used SELECT field1, field2, ... FROM ..., then you must add the new fields to your SELECT statement(s) - but this would be true for any other SQL implementation, MySQL included.
You could use the LISTEN / NOTIFY mechanism in PostgreSQL to notify your client when the database schema is altered.
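For example, here is a rough sketch using an event trigger (channel and function names are illustrative) that pushes a notification whenever a DDL command completes:
-- Function executed by the event trigger: publish the command tag on a channel
CREATE OR REPLACE FUNCTION notify_ddl()
RETURNS event_trigger
LANGUAGE plpgsql
AS $$
BEGIN
    PERFORM pg_notify('ddl_events', tg_tag);
END;
$$;

-- On PostgreSQL versions before 11, use EXECUTE PROCEDURE instead of EXECUTE FUNCTION
CREATE EVENT TRIGGER ddl_notify
ON ddl_command_end
EXECUTE FUNCTION notify_ddl();

-- Any session that has run LISTEN will receive the command tag (e.g. ALTER TABLE)
LISTEN ddl_events;
This does not reproduce MySQL's behaviour of printing the full generated statement, but it at least tells the client that the schema changed so it can react.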