I have a staging table (stage_enrolments) and a production table (enrolments). The staging table isn't partitioned, the production table is. I'm trying to use the ALTER TABLE SWITCH statement to transfer the records in the staging table to production.
ALTER TABLE dbo.stage_enrolments
SWITCH TO dbo.enrolments PARTITION #partition_num;
However, when I execute this statement I get the following error:
ALTER TABLE SWITCH statement failed. Target table 'Academic.dbo.enrolments' is referenced by 1 indexed view(s), but source table 'Academic.dbo.stage_enrolments' is only referenced by 0 matching indexed view(s)
I have the same indexed view defined on dbo.stage_enrolments as I do on dbo.enrolments - although the view on enrolments is partitioned. I've tried recreating the views and their indexes, checking that all options are the same, but I get the same result. If I remove the index from the dbo.enrolments view then it works fine.
I have it working on another set of tables that have indexed views so I'm not sure why it isn't working for these. Does anyone have an idea as to why this may be occurring? What else should I check?
The problem has now been sorted. I've recreated the indexed view once again and it is now working. I haven't actually changed anything though other than the name of the index so I'm not sure what the problem was.
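In case it helps anyone hitting the same error, here is a sketch of a catalog query (assuming both tables live in the same database) that lists the indexed views referencing a given table; run it against both the source and the target table to check that the two sides match:
-- Indexed views referencing dbo.enrolments: a view counts as "indexed"
-- once it has a unique clustered index (index_id = 1).
SELECT v.name AS view_name, i.name AS index_name
FROM sys.views AS v
JOIN sys.indexes AS i
  ON i.object_id = v.object_id AND i.index_id = 1
JOIN sys.sql_expression_dependencies AS d
  ON d.referencing_id = v.object_id
WHERE d.referenced_id = OBJECT_ID('dbo.enrolments');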
I have to run a script for a company. I just get the same error every time.
The query:
DELETE FROM WMO
WHERE (clientnr = ****** AND number_message = *****)
The error:
ORA-01752: cannot delete from view without exactly one key-preserved
table
What did I do wrong?
Thanks!
Database views are, in general, projections of one or more tables; to be specific, a view is a SELECT statement over one or more tables. Unless the view is constructed from a single key-preserved table, it is impossible for the database engine to decide what it should delete and from which table.
The best solution is to run the DELETE command against the tables that are used to construct the view.
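As a sketch, assuming a hypothetical base table wmo_messages is the table in the WMO view's definition that actually holds these rows, the delete would target it directly:
-- wmo_messages is a hypothetical name; replace it with whichever table in
-- the view's definition contains the rows to remove.
DELETE FROM wmo_messages
WHERE clientnr = :clientnr
  AND number_message = :number_message;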
Additional information:
ORA-01752: cannot delete from view without exactly one key-preserved table
I run a query on Databricks:
DROP TABLE IF EXISTS dublicates_hotels;
CREATE TABLE IF NOT EXISTS dublicates_hotels
...
I'm trying to understand why I receive the following error:
Error in SQL statement: AnalysisException: Cannot create table ('default.dublicates_hotels'). The associated location ('dbfs:/user/hive/warehouse/dublicates_hotels') is not empty but it's not a Delta table
I already found a way to solve it (by removing it manually):
dbutils.fs.rm('.../dublicates_hotels',recurse=True)
But I can't understand why it still keeps the table.
This happens even though I created a new cluster (and terminated the previous one) and I'm running this query with the new cluster attached.
Can anyone help me understand this?
I also faced a similar problem; using the CREATE OR REPLACE TABLE command instead solved it for me.
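A minimal sketch of that approach (the column list is hypothetical, since the original CTAS body is elided in the question):
-- CREATE OR REPLACE replaces any existing definition for this name instead
-- of failing on it; hypothetical columns for illustration only.
CREATE OR REPLACE TABLE dublicates_hotels (
  hotel_id   BIGINT,
  hotel_name STRING
) USING DELTA;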
DROP TABLE & CREATE TABLE work with entries in the metastore, which is a database that keeps the metadata about databases and tables. There can be situations where the entry doesn't exist in the metastore, so DROP TABLE IF EXISTS does nothing. But when CREATE TABLE is executed, it additionally checks the location on DBFS and fails if the directory exists (possibly with data in it). This directory could be left over from previous experiments where data was written without going through the metastore.
If the table was created with a LOCATION specified, it is an EXTERNAL table, so when you drop it you drop only the Hive metadata for that table; the directory contents remain as they are. You can restore the table with CREATE TABLE if you specify the same LOCATION (Delta keeps the table structure along with its data in the directory).
If LOCATION wasn't specified when the table was created, it's a MANAGED table; DROP will destroy both the metadata and the directory contents.
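A sketch of the two cases (the table names and LOCATION path are made up for illustration):
-- EXTERNAL: LOCATION is specified, so DROP TABLE removes only the metadata;
-- the directory and its files stay behind.
CREATE TABLE ext_example (id BIGINT)
USING DELTA
LOCATION 'dbfs:/mnt/demo/ext_example';

-- MANAGED: no LOCATION, so DROP TABLE removes the metadata and the data.
CREATE TABLE managed_example (id BIGINT)
USING DELTA;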
This is a really strange question. I have a dynamic SQL stored procedure that inserts data into a static table in the queried database. This table is referenced quite a lot in the query. So when I needed to change this table and add two new columns, I deleted it and used the import wizard (Excel spreadsheet) to create a new one with the same name, so I didn't have to amend the SP. The SP works fine; however, I also have this query outside of dynamic SQL, and when I run it, it now fails.
At first I couldn't work out why, but then I saw that it was failing on the INSERT INTO the newly created (but identically named) table because there were too many columns to match the table. I ran a simple SELECT * FROM it and it brought back the old table with the 3 columns it used to have, not the new table with 5 columns.
How can this table still exist if it's been deleted? It's like a ghost table still remains.
Thanks
First check whether your temp table exists in your database. If it exists, drop the temp table and then create the new one.
IF EXISTS (SELECT 1 FROM tempdb.dbo.sysobjects
           WHERE xtype IN ('U')
             AND id = OBJECT_ID(N'tempdb..#your_tableName'))
    DROP TABLE #your_tableName;
This is known behaviour with views and can be fixed by dropping and recreating the view. I'm not sure why it is acting the same way for a table. Possibilities I can think of:
It is a view
The new table was not created in the same database
Try dropping and recreating the sproc, for what it is worth
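If it helps, here is a sketch of a catalog query (standard system views; 'YourTableName' is a placeholder for the table in question) that lists every object with that name and its columns, which should show where the old 3-column version actually lives:
-- Run this in the database the query targets; repeat in other databases if
-- you suspect the new table landed elsewhere.
SELECT s.name AS schema_name, o.name AS object_name, o.type_desc, c.name AS column_name
FROM sys.objects AS o
JOIN sys.schemas AS s ON s.schema_id = o.schema_id
JOIN sys.columns AS c ON c.object_id = o.object_id
WHERE o.name = 'YourTableName'   -- placeholder
ORDER BY s.name, o.name, c.column_id;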
I have a table T_SG_LTA_TRANSACTION_TYPE in source database.
I want to move it into a target database.
I have created a materialized view log in source database.
CREATE MATERIALIZED VIEW LOG ON T_SG_LTA_TRANSACTION_TYPE WITH PRIMARY KEY, ROWID;
Then I created materialized view in target database with following query.
CREATE MATERIALIZED VIEW T_SG_LTA_TRANSACTION_TYPE
ON PREBUILT TABLE
REFRESH FAST ON DEMAND
FOR UPDATE
AS
SELECT TRANSACTION_ID,
TRANSACTION_DESCRIPTION,
FILE_TYPE_ID
FROM T_SG_LTA_TRANSACTION_TYPE@EBAODWH_SRC_1_GS_AIG;
But when I refresh the materialized view, I am unable to load the data which is already present in T_SG_LTA_TRANSACTION_TYPE (source DB).
BEGIN
DBMS_MVIEW.refresh('T_SG_LTA_TRANSACTION_TYPE');
END;
Only the data which is updated in the source table after the creation of the materialized view is being loaded into the materialized view. But I want to get the whole of the source table's data (modified and unmodified) into the materialized view, and I need this unmodified data only once, when the mview is created. Please suggest a solution. Thanks in advance.
You seem to be using the ON PREBUILT TABLE clause incorrectly:
The ON PREBUILT TABLE clause lets you register an existing table as a preinitialized materialized view.
And
Caution:
This clause assumes that the table object reflects the materialization of a subquery. Oracle strongly recommends that you ensure that this assumption is true in order to ensure that the materialized view correctly reflects the data in its master tables.
You're essentially saying that T_SG_LTA_TRANSACTION_TYPE already exists on the target database - not the source - and contains the current state of the source table; which, since you're missing data, is not true.
When you refresh Oracle is only looking for changes since the view was created, as it's supposed to; it relies on the MV log to identify what has changed.
Drop the MV in the target database, without the preserve table clause, and then recreate it without the on prebuilt table clause.
Make sure you're dropping the right thing in the right database/schema, of course...
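A sketch of the drop-and-recreate, run in the target database (object and DB link names are taken from the question):
-- Remove the existing materialized view.
DROP MATERIALIZED VIEW T_SG_LTA_TRANSACTION_TYPE;
-- If the prebuilt container table is left behind, it also has to go before
-- the MV can be recreated under the same name:
-- DROP TABLE T_SG_LTA_TRANSACTION_TYPE;

-- Recreated without ON PREBUILT TABLE: the initial build does a complete
-- load of all existing rows; later fast refreshes use the MV log.
CREATE MATERIALIZED VIEW T_SG_LTA_TRANSACTION_TYPE
REFRESH FAST ON DEMAND
AS
SELECT TRANSACTION_ID,
       TRANSACTION_DESCRIPTION,
       FILE_TYPE_ID
FROM   T_SG_LTA_TRANSACTION_TYPE@EBAODWH_SRC_1_GS_AIG;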
It isn't clear if the (empty) table already existed in your target DB before you started; or you've run this a couple of times and dropped the MV with preserve table - and either the source was empty last time, or you truncated the target table afterwards - or perhaps you tried to export/import the initial state but just got the metadata and not the data. If the table you said existed on the target DB did not, in fact, exist then Oracle would have thrown an ORA-12059 exception when you tried to create the MV. I suspect you'd created an empty table and then tried to convert it to an MV, but if you do that it won't get any data from the source DB, as you've seen.
After dropping a table, I found that the index created on the columns of the dropped table is gone. I just want to know what happens after that; could someone please explain?
What else gets dropped along with the table?
In Oracle, when dropping a table:
all table indexes and domain indexes are dropped
any triggers defined on the table are dropped
if the table is partitioned, any corresponding local index partitions are dropped
if the table is a base table for a view or if it is referenced in a stored procedure, function, or package, then these dependent objects are invalidated but not dropped
In Postgres, DROP TABLE always removes any indexes, rules, triggers, and constraints that exist for the target table.
MySQL also drops table indexes when tables are dropped.
For more info, see Does dropping a table in MySQL also drop the indexes?
By default, MS SQL Server also drops indexes when a table is dropped.
(Observed in version 13.0.4206.0.)
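A quick way to observe this in SQL Server (throwaway table and index names):
-- Create a scratch table with a nonclustered index, then drop the table.
CREATE TABLE dbo.drop_demo (id INT NOT NULL, val VARCHAR(20));
CREATE INDEX ix_drop_demo_val ON dbo.drop_demo (val);
DROP TABLE dbo.drop_demo;
-- Returns no rows: the index was removed together with the table.
SELECT name FROM sys.indexes WHERE name = 'ix_drop_demo_val';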