Need some advice. I have a sequence container that failed when I tried to execute it. I found out that there was a difference in constraints between some columns of the source and destination tables.
Then I unchecked the "Check Constraints" option in the destination, and the run succeeded.
I then tried to reproduce the error by checking the "Check Constraints" option again and re-running the container, but now it keeps running successfully. I can no longer replicate the earlier failure. Kindly advise what could possibly be causing this.
I understand that this "Check Constraints" setting specifies that the data flow pipeline engine will validate the incoming data against the constraints of the target table.
This is a difficult question to answer without knowing the constraints on your tables, what data you are inserting, and in what order. My guess would be that something like this happened:
Table A has a foreign key reference to Table B. At first, both tables have 0 records.
With the "Check Constraints" option enabled, you attempt to load records into Table A. This fails because of the reference to Table B: at this point in time, Table B does not yet contain any records.
Then you uncheck "Check Constraints", and can now load records into both tables without error.
Then you re-check "Check Constraints". Now both Table A and Table B contain data, which means that you can insert data into Table A again without violating the constraints.
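A minimal sketch of that kind of situation in T-SQL, with made-up table and column names:
CREATE TABLE TableB (Id INT PRIMARY KEY);
CREATE TABLE TableA
(
    Id  INT PRIMARY KEY,
    BId INT NOT NULL FOREIGN KEY REFERENCES TableB (Id)
);
-- Fails while TableB is still empty (foreign key violation):
INSERT INTO TableA (Id, BId) VALUES (1, 1);
-- Once the parent row exists, the same insert succeeds:
INSERT INTO TableB (Id) VALUES (1);
INSERT INTO TableA (Id, BId) VALUES (1, 1);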
I have a staging table without any constraints in my Azure SQL database (Azure SQL database 12.0.2000.8). I want to insert the data from the Staging table into the "real" table on which multiple constraints are set. When inserting the data, I use a statement of the kind
INSERT INTO <someTable> SELECT <columns> FROM StagingTable;
Now I only get the first error when some constraint is violated. However, for my use case, it is important to get all violations so they can be resolved together.
I have tried using TRY...CATCH, but that throws on the first error and runs the catch clause without processing the rest of the data. Note that the correct data without violations should not be inserted either, so the whole insert statement can be rolled back on the first error; however, I want to see all violations so I can correct them all without having to run the insert statement multiple times to collect every error.
EDIT:
The types of constraints that need to be checked are foreign key constraints, NOT NULL constraints, and duplicate keys. No casting is done, so there is no need to check for conversions.
There are a couple of options:
If you want to capture row-level information, you have to go for a cursor or a WHILE loop, try to insert each row inside a TRY...CATCH block, check whether you get an error, and log it (a rough sketch follows the DBCC example below).
Create another table similar to the main table (say, MainCheckTable) with all the constraints, disable those constraints, and load the data.
Now you can leverage DBCC CHECKCONSTRAINTS to see all the constraint violations.
USE DBName;
DBCC CHECKCONSTRAINTS(MainCheckTable) WITH ALL_CONSTRAINTS;
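The row-by-row approach from the first option could look roughly like this. This is only a sketch: it assumes the target table is called MainTable, that StagingTable has a key column Id to step through, and that XACT_ABORT is OFF so a constraint error does not doom the surrounding transaction; <columns> stands for your actual column list.
DECLARE @Id INT;
DECLARE row_cursor CURSOR FOR SELECT Id FROM StagingTable;

BEGIN TRANSACTION;   -- everything is rolled back at the end; we only want the errors

OPEN row_cursor;
FETCH NEXT FROM row_cursor INTO @Id;
WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        INSERT INTO MainTable (<columns>)
        SELECT <columns> FROM StagingTable WHERE Id = @Id;
    END TRY
    BEGIN CATCH
        -- Log the violation for this row instead of stopping
        PRINT CONCAT('Row ', @Id, ': ', ERROR_MESSAGE());
    END CATCH;
    FETCH NEXT FROM row_cursor INTO @Id;
END

CLOSE row_cursor;
DEALLOCATE row_cursor;

ROLLBACK TRANSACTION;   -- nothing is actually inserted; only the errors are reported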
First, don't look at your primary table(s). Look at the related tables, e.g. lookups, and populate these first. Once you have populated the related tables (i.e. satisfied all related constraints), then add the data.
You need to work backwards from the least constrained tables to the most constrained, if that makes sense.
You should check that your related tables have the required reference values/fields that you intend to insert. This is easy to do, since you already have a staging table.
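One quick sketch of such a check, assuming a lookup table called LookupTable and a foreign key column LookupId in the staging data (both names are just placeholders for your own):
-- Staging values that have no matching row in the lookup table
SELECT DISTINCT s.LookupId
FROM StagingTable AS s
LEFT JOIN LookupTable AS l ON l.Id = s.LookupId
WHERE l.Id IS NULL;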
I am new to SSIS packages and just require assistance on how to transfer data from one data source into my own database.
Below is my data flow:
Now I have an ODBC Source (Http_Requests Source) where I take data from a PostgreSQL database table (see screenshot below for table columns and data):
Below is the OLE DB destination where it has the table I want to transfer the data to (this table is currently blank):
Now I tried to start debugging to extract the data but I get a few errors (displayed below):
I am a complete novice, so I would like some guidance on what I need to include in order to get this SSIS package to transfer data across. Would I need to include a merge statement, and how do I apply it? I heard you can write a MERGE as a stored procedure and call the proc as a SQL command. Does that mean I will need to write a proc in SSMS and then call it within the OLE DB Destination?
If somebody can provide an example and screenshot then that would be very helpful as I am really new to SSIS.
Thank you,
Check the constraints on the destination table, or disable them before running the package.
Below are the queries you can use.
-- Disable all table constraints
ALTER TABLE YourTableName NOCHECK CONSTRAINT ALL
-- Enable all table constraints
ALTER TABLE YourTableName CHECK CONSTRAINT ALL
Tick the "Keep Identity" box or drop the primary key on the table. After you apply the changes, do not forget to refresh the metadata by opening the mappings in SSIS.
The error means that PerformanceId is an IDENTITY column on your destination table. IDENTITY columns are read-only unless you tell SQL Server otherwise. In tSQL, to be able to insert into an IDENTITY column, we would turn on IDENTITY_INSERT. Because you are in SSIS, you can accomplish the same thing by checking the "Keep Identity" box.
HOWEVER, whenever you get an error like this, it is usually a sign that you should NOT be mapping ID to PerformanceId. The question you have to ask is: is the identity from your source supposed to be the identity of the destination table? Usually not; most of the time the destination would use another column as a surrogate key. Then you have to understand whether it is even possible, because if there is a unique constraint or primary key, the identity cannot repeat, which means you have to know that your source's ID column will not cause a duplicate primary key violation.
More than likely, the actual fix is for you to unmap ID from the source and ignore the value.
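For reference, the tSQL equivalent mentioned above looks roughly like this; the table name dbo.Performance and the second column are only illustrative:
-- Allow explicit values to be inserted into the IDENTITY column
SET IDENTITY_INSERT dbo.Performance ON;

INSERT INTO dbo.Performance (PerformanceId, SomeColumn)
VALUES (42, 'example');

-- Turn it back off; only one table per session can have it on at a time
SET IDENTITY_INSERT dbo.Performance OFF;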
The column PerformanceID (in the target) is almost certainly an identity column, and that is why it is not working. You may not want to transfer it (and let SQL Server generate values for PerformanceID), or you can check 'Keep Identity.'
I'm in the process of testing a large number of schema changes to upgrade our db to run with the latest version of a packaged product we have.
At this point I'm not interested in the data contained in the db, only the schema (i.e. the tables, views, constraints, keys, stored procedures, etc.).
My testing entails running scripts, resolving errors, and re-running the scripts. If I want to re-run the scripts, I need to first restore the db to get it back to a known state. Restoring the db is very time consuming as it has lots of data. I would like to "slim down" the db and remove as much data as possible. That way it will be quicker to restore the db and re-run my scripts.
When I attempt to delete records from many of the tables ("delete from table-name") I run into constraint errors and the command stops.
Is there a way to allow the command to continue and, in effect, delete all the records in the table where there aren't constraint issues? In other words I'd like the command to ignore errors and continue to delete all the records it can.
Any Suggestions would be greatly appreciated.
Byron above makes a good point; you can disable FOREIGN KEY or CHECK constraints with the ALTER TABLE command:
ALTER TABLE TableName NOCHECK CONSTRAINT ALL
Then do your work, and when finished,
ALTER TABLE TableName CHECK CONSTRAINT ALL
to re-enable constraints. Be careful, though: if you leave your data in an inconsistent state that violates constraints once you re-enable them, future updates can fail due to constraint errors.
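If you want to apply this to every table at once rather than one at a time, a common shortcut is the undocumented (and unsupported) sp_MSforeachtable procedure; a rough sketch:
-- Disable all foreign key and check constraints in the database
EXEC sp_MSforeachtable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL';

-- Remove the data
EXEC sp_MSforeachtable 'DELETE FROM ?';

-- Re-enable and re-validate the constraints afterwards
EXEC sp_MSforeachtable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL';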
Sources:
http://weblogs.sqlteam.com/joew/archive/2008/10/01/60719.aspx
http://technet.microsoft.com/en-us/library/ms190273.aspx
I have an issue related to data in SQL Server. In my database, some of the constraints were not enabled, i.e. they were not checked. After working on this for some time, we found that parent rows could be deleted without deleting their child rows, which was a problem. I enabled all the constraints in the database using the query
ALTER TABLE tbl_name CHECK CONSTRAINT ALL
The above query was executed on all the tables of that database without any error. But my concern is whether it will actually work; if it does work on the existing data, what will happen to the rows whose parent data has already been deleted?
I want to know whether there is any way I can validate data whose parent record doesn't exist, across the entire database. There are about 270 constraints, covering FOREIGN KEY and UNIQUE KEY. I don't want to go for the manual option.
Please help me out.
ALTER TABLE tbl_name CHECK CONSTRAINT ALL
only re-enables the constraints. Importantly, the constraints are not checked against the existing data in the database (nor are they trusted by the optimizer). If you want that to occur, you need to specify WITH CHECK as well:
ALTER TABLE tbl_name WITH CHECK CHECK CONSTRAINT ALL
(And yes, the word CHECK appears twice)
If you execute this, and there are orphaned child rows (or other constraint violations), then the ALTER TABLE will fail with an error message. There's nothing SQL Server can do to fix this issue - it's for you to decide whether to a) remove the orphaned rows, or b) re-create, in some manner, a suitable parent row for them.
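To locate the orphans for a given relationship, a query along these lines can help (ChildTable, ParentTable and the key columns are placeholders for your own names):
-- Child rows whose parent row no longer exists
SELECT c.*
FROM ChildTable AS c
WHERE NOT EXISTS (SELECT 1 FROM ParentTable AS p WHERE p.Id = c.ParentId);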
You can also add ON DELETE CASCADE to your foreign key definitions to prevent orphaned child rows from persisting (an example follows below).
This is more of a 'better practice' going forward than a solution, but I believe Damien_The_Unbeliever has answered your main question.
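For example, a foreign key with cascading deletes could be defined like this (the table and column names are illustrative):
ALTER TABLE ChildTable
ADD CONSTRAINT FK_ChildTable_ParentTable
    FOREIGN KEY (ParentId) REFERENCES ParentTable (Id)
    ON DELETE CASCADE;  -- deleting a parent row also removes its child rows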
I've written a SQL query for a report that creates a permanent table and then performs a bunch of inserts and updates to get all the data, according to company policy. It runs fine in SQL Server Management Studio and in Crystal Reports 2008 on my machine. However, when I schedule it to run on the server with SAP BusinessObjects Central Management Console, it fails with the error "Associated statement not prepared."
I have found that changing this permanent table to be a temp table makes the query work. Why would this be?
Some research shows that this error is sometimes sent instead of the true error. Other people reporting it talk of foreign key and (I would also assume) duplicate key errors.
Things I would check:
Does your permanent table have any unique constraints that might be violated? Or any foreign key constraints?
Are you creating indexes on the table after it has been created?
Are you creating any views over this permanent table?
What happens if the table already exists before the job is run?
What happens to the table if the job fails?
Are there any intermediate steps (such as within a stored procedure) that might involve additional temp or permanent tables?
ETA: Also check what schema the permanent table belongs to: is it usually created with "dbo"? Are you specifying that explicitly? Is there any chance that there might be a permissions problem?
That is often a generic error. Are you able to run it on the server as the account that it is scheduled to run as? It is most likely a permission error or constraint issue.
Assuming you really need a regular table, why not create the permanent table once, rather than creating it every time you run the query?
Recreating a regular user table each time the query runs does not seem right. But to make it work, you may try to create the table in a separate batch or query (e.g. put GO in the script so it is split into separate batches).
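A minimal illustration of that separation, with a made-up table name:
-- First batch: create the permanent table
CREATE TABLE dbo.ReportData (Id INT, Amount DECIMAL(10, 2));
GO

-- Second batch: the inserts/updates compile after the table already exists
INSERT INTO dbo.ReportData (Id, Amount) VALUES (1, 100.00);
GO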
Regarding why it happens, I'm thinking about statement caching. The server compiles the query and stores the result for some time in case the same query has to run again. My speculation is that it tries to run the compiled query, which refers to the table you have already dropped and recreated under the same name. The name is the same, but physically it's a new table. You could hit some bug in the server this way. This is just speculation; it could be a different kind of problem.
Without seeing the code it's a guess, but since you are creating a permanent table every time you run the report, I assume you must be dropping the table at some point? (Or you'd have a LOT of tables building up over time.)
I suggest a couple of angles to consider:
1) Make certain to prefix the table names (perhaps with a session ID or something) if you are concerned about concurrency/locking issues and the like, so each report run has a table exclusive to itself.
2) If you are dropping the table at the end, adjust your logic to leave the table be instead. Write code that drops it when you (re)start the operation. It's possible the report is still clinging to the table and you are destroying it prematurely.