Thank you in advance for your help.
I am getting the error below while creating a user in Odoo. It was working fine at first, but it suddenly started showing this error:
The operation cannot be completed: another model requires the record
being deleted. If possible, archive it instead.
Model: Unknown (Unknown), Constraint:
digest_digest_res_users_rel_digest_digest_id_fkey
Please help me with this.
It seems a module was installed and then uninstalled on this database, and it left a stale constraint behind in the table. If we drop that constraint, the system should work smoothly again.
You may try this:
Log in to your PostgreSQL database with psql.
Type alter table digest_digest drop constraint, press Tab, and check whether the digest_digest_res_users_rel_digest_digest_id_fkey constraint shown above appears in the completion list.
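A rough sketch of the eventual statement, assuming tab completion confirms which table owns the constraint (its name suggests the digest_digest_res_users_rel join table rather than digest_digest itself):

-- inspect the join table to confirm the stale foreign key is present
\d digest_digest_res_users_rel
-- drop the constraint left behind by the uninstalled digest module
ALTER TABLE digest_digest_res_users_rel
    DROP CONSTRAINT digest_digest_res_users_rel_digest_digest_id_fkey;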
I have two schemas in my database. One schema has a package that uses a table from the other schema. That table is frequently dropped and re-created with the same name after some time span.
So whenever the table is dropped, the package in the other schema becomes invalid because of the missing table.
Is there any way to keep the grants in place after the table is dropped, or any way to recompile that package automatically? Please help me with this.
Thanks in Advance.
Unfortunately there isn't any automatic task to do this; you will have to write some code to automate it. Please check http://psoug.org/reference/ddl_trigger.html, which should help.
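As a sketch only (the names are hypothetical: OWNER_SCHEMA.SHARED_TABLE for the table, OTHER_SCHEMA.MY_PKG for the package), a database-level DDL trigger could queue a recompile whenever the table is re-created. The recompile is submitted as a job because DDL cannot run directly inside a DDL trigger:

CREATE OR REPLACE TRIGGER recompile_pkg_after_create
AFTER CREATE ON DATABASE
DECLARE
    l_job BINARY_INTEGER;
BEGIN
    -- fire only when the shared table is (re)created
    IF ora_dict_obj_type = 'TABLE'
       AND ora_dict_obj_name = 'SHARED_TABLE' THEN
        -- queue a job that recompiles the dependent package shortly afterwards
        DBMS_JOB.SUBMIT(
            job  => l_job,
            what => 'EXECUTE IMMEDIATE ''ALTER PACKAGE other_schema.my_pkg COMPILE'';');
    END IF;
END;
/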
Packages will always become invalid if one of their dependent objects is dropped. Recompiling the package while the table does not exist will not work; the package will still be invalid.
Consider deleting all rows from the table (or truncating it) instead of dropping and recreating it.
Otherwise, try to minimise the time between dropping and recreating the table. Make sure all relevant grants and synonyms are created at the same time as the table.
The only workaround I can think of is to create a copy of the table before you drop it. If you use a synonym to reference the table in the package, you could "trick" the package into using this copy of the table while you drop the actual one. The package would still become invalid while you change the synonym, but this would only be for a short amount of time.
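A rough sketch of that trick, assuming the package reaches the table through a synonym (all names below are hypothetical):

-- keep a copy of the current data before the real table is dropped
CREATE TABLE owner_schema.shared_table_copy AS
    SELECT * FROM owner_schema.shared_table;
-- point the synonym the package uses at the copy
CREATE OR REPLACE SYNONYM other_schema.shared_table_syn
    FOR owner_schema.shared_table_copy;
-- ... drop and recreate owner_schema.shared_table here ...
-- switch the synonym back once the real table exists again
CREATE OR REPLACE SYNONYM other_schema.shared_table_syn
    FOR owner_schema.shared_table;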
The only ideal solution to this problem is to stop dropping the table.
On dropping a unique constraint, the following error occurs:
ORA-04098: trigger 'SYS.XDB_PI_TRIG' is invalid and failed re-validation
I have no permission to recompile this trigger.
What could be the problem here and is there any way we can solve this?
This error reflects a compilation/authorization failure for the mentioned trigger. The trigger is invalid, so it could not be retrieved for execution. You can run
SHOW ERRORS TRIGGER SYS.XDB_PI_TRIG;
to get a better picture of the error.
The trigger may need to be recompiled. Running:
alter trigger SYS.XDB_PI_TRIG compile
will recompile this trigger.
A common case is when the user has privileges only to run, and not to change, the respective triggers. In that case you may need to recompile the trigger as SYSDBA.
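For example, from SQL*Plus on the database server (assuming you can obtain SYSDBA access):

sqlplus / as sysdba
SQL> ALTER TRIGGER SYS.XDB_PI_TRIG COMPILE;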
I would think that you may be dropping a primary key; check which constraint you are dropping. If you are dropping a PK and it is being used as a foreign key, then this would invalidate the trigger.
Found a solution to this:
The XDB schema was invalid for the database, so we were unable to drop any objects in this database. Making the XDB schema valid solved the problem.
Thanks for your answers!
Need some advice. I have a sequence container that failed when I tried to execute it. I found out that there was a difference in constraints between some columns of the source and destination tables.
Then, I tried to uncheck the "Check Constraints" option in the destination and it was a success.
I tried to reproduce the error by re-checking the "Check Constraints" option and running the container again, but now it still runs successfully. I can no longer reproduce the earlier failure. Kindly advise what could possibly be causing this issue.
I understand that this "Check Constraints" setting specifies that the data flow pipeline engine will validate the incoming data against the constraints of the target table.
This is a difficult question to answer without knowing the constraints on your tables, what data you are inserting, and in what order. My guess would be, that something like this happened:
Table A has foreign key reference to Table B. At first, both tables have 0 records.
With "Check Constraints" option enabled, you attempt to load records into Table A. This fails, because of the reference to Table B, and at this point in time, Table B does not yet contain any records.
Then you uncheck "Check Constraints", and can now load records into both tables without error.
Then you re-check "Check Constraints". Now both Table A and Table B contain data, which means that you can insert data into Table A again without violating the constraints.
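In T-SQL terms, the failing and succeeding cases look roughly like this (hypothetical table names):

CREATE TABLE TableB (Id INT PRIMARY KEY);
CREATE TABLE TableA (Id INT PRIMARY KEY,
                     BId INT REFERENCES TableB (Id));

-- with TableB still empty, this insert violates the foreign key,
-- which is what the load with "Check Constraints" enabled ran into
INSERT INTO TableA (Id, BId) VALUES (1, 1);

-- once TableB holds the referenced row, the same kind of insert succeeds
INSERT INTO TableB (Id) VALUES (1);
INSERT INTO TableA (Id, BId) VALUES (2, 1);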
I've just had something very strange happen to me with a Firebird database.
I was trying to create a table, and the CREATE TABLE failed for some reason. But now it's stuck in a very strange state:
If I try to CREATE TABLE again with the same table name, it gives an error: the table already exists. But if I try to DROP TABLE that table, it gives an error: the table does not exist. Trying to SELECT * FROM that table gives the "table does not exist" error, and the name does not show up in the metadata query:
SELECT RDB$RELATION_NAME
FROM RDB$RELATIONS
WHERE RDB$SYSTEM_FLAG=0
So for some reason, the table really seems to not be there, but I can't create it because something somewhere indicates that it does exist.
Does anyone have any idea how to fix this? I've already tried closing all connections to that database, which has helped with inconsistency issues in the past, but this time it doesn't help.
You didn't give details about what the error was when you tried to create the table, so I cannot comment on it. But RDB$RELATIONS is not the only system table affected when you create a table. Maybe you are now in an inconsistent situation where some information about that table exists in some system tables and doesn't exist in others.
Another option is corrupted indexes on the system tables, so the record is not there but the index thinks it still exists.
Try to do a backup/restore and see if it helps. If it doesn't work, try to search for records related to that "non created" table in the other system tables (RDB$RELATION_FIELDS, etc.) and, if you find any, try to delete them.
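For example, assuming the phantom table was going to be called MY_TABLE:

-- look for leftover column definitions belonging to the phantom table
SELECT RDB$FIELD_NAME
FROM RDB$RELATION_FIELDS
WHERE RDB$RELATION_NAME = 'MY_TABLE';

-- if anything shows up, deleting it may clear the half-created metadata
DELETE FROM RDB$RELATION_FIELDS
WHERE RDB$RELATION_NAME = 'MY_TABLE';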
As a last option, you may create a new clean database with correct metadata and pump your data to it using IBDataPump.
I am using the RedBean ORM. To create the schema I used the standard RedBean approach of inserting data so that RedBean would auto-fit the schema to suit my needs. I put this in a script which is basically used to build the schema when I need to initialize my database.
The problem is that RedBean keeps a row or 2 in each table (the ones that I initially inserted to get redbean to build the schema).
If it were a normal database, to erase all rows I would just drop the schema and rebuild it, but in this case that's not possible since the initial rows would still exist.
Unfortunately there isn't too much Redbean Q/A out there. Anyone know how I would do this using the Redbean interface?
I have tried
$listOfTables = R::$writer->getTables();
foreach($listOfTables as $table)
{
R::wipe($table);
}
Of course this doesn't work though. (It doesn't TRUNCATE the tables in the correct order, so I get an error about another table using a key as a foreign link; it simply iterates in alphabetical order.)
Fatal error: Uncaught [42000] - SQLSTATE[42000]: Syntax error or access violation: 1701 Cannot truncate a table referenced in a foreign key constraint (`redbeandb`.`research`, CONSTRAINT `research_ibfk_1` FOREIGN KEY (`ownEducationHistory_id`) REFERENCES `redbeandb`.`educationhistory` (`id`)) thrown in C:\Users\Rod\nginx-1.0.12\html\rb.php on line 105
If someone has a (redbean api) solution, it would be much appreciated. And hopefully this question can be beneficial to building up more RedBean Q/A here on Stackoverflow.
Use
R::nuke();
Yes, it will drop all tables, but since RedBeanPHP creates all tables on the fly this is not a problem.
I know this is an old post, but I figured I'd help out someone finding this today. You can tell MySQL to ignore foreign key checks if you don't care about data integrity (i.e., you plan on wiping all related tables).
R::exec('SET FOREIGN_KEY_CHECKS = 0;');
$listOfTables = R::$writer->getTables();
foreach($listOfTables as $table)
{
R::wipe($table);
}
R::exec('SET FOREIGN_KEY_CHECKS = 1;');