Can one drop multiple views at once in Impala - sql

I have a database with three views. I am trying to execute a command to drop all three at once.
The Impala Guide shows it is possible to drop one view at a time using
DROP VIEW IF EXISTS mydb.view_name
But it does not suggest a method for dropping more than one view; at the same time, this page from the Guide does not suggest this would be a restriction.
If I were using SQL Server (or certain other SQL dialects), I could follow the method shown in this tutorial, separating the view names with commas.
DROP VIEW IF EXISTS
mydb.view_v1,
mydb.view_v2,
mydb.view_v3;
I would expect this to drop the three views from the database.
However, when I try this in Impala I get the error below:
AnalysisException: Syntax error in line 2:undefined: ...exists mydb.view_v1, mydb.view_v2, mydb... ^ Encountered: COMMA Expected: ADD, ALTER, AS, CACHED, CHANGE, COMMENT, DROP, FROM, LIKE, LOCATION, PARTITION, PARTITIONED, PRIMARY, PURGE, RECOVER, RENAME, REPLACE, ROW, SELECT, SET, SORT, STORED, STRAIGHT_JOIN, TBLPROPERTIES, TO, UNCACHED, VALUES, WITH CAUSED BY: Exception: Syntax error
and all views remain.

Clearly, the error indicates that, in Impala's terms, your statement is not syntactically correct (and not supported).
The only way I can think of to simulate something similar to what you need is to place all the views into a separate database, and then run a DROP DATABASE ... CASCADE. I expect this operation to behave atomically, i.e. either succeed or fail as a single unit.
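For illustration, a minimal sketch (the database and view names are placeholders):
-- Keep the disposable views together in their own database
CREATE DATABASE IF NOT EXISTS scratch_views;
CREATE VIEW scratch_views.view_v1 AS SELECT 1 AS c;
CREATE VIEW scratch_views.view_v2 AS SELECT 2 AS c;
CREATE VIEW scratch_views.view_v3 AS SELECT 3 AS c;
-- Drop all three views (and the database) in a single statement
DROP DATABASE IF EXISTS scratch_views CASCADE;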

Related

Cannot delete from view without exactly one key preserved table

I have to run a script for a company. I just get the same error every time.
The query:
DELETE FROM WMO
WHERE (clientnr = ****** AND number_message = *****)
The error:
ORA-01752: cannot delete from view without exactly one key-preserved
table
What did I do wrong?
Thanks!
Database views are, in general, projections of one or more tables; to be specific, a view is a SELECT statement over one or more tables. It is impossible for the database engine to decide what it should delete, and from which table, unless the view is constructed from a single table.
The best solution is to run the DELETE command against the tables that are used to construct the view.
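For example, assuming WMO is a view over a base table here called WMO_MESSAGES (a made-up name), run the delete against the base table directly:
-- Delete from the underlying table rather than the view
DELETE FROM WMO_MESSAGES
WHERE clientnr = :client_nr
  AND number_message = :message_nr;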
Additional information:
ORA-01752: cannot delete from view without exactly one key-preserved table

How to remove error for schema bindings in Redshift

I want to be able to use a CTE to make the below SQL work. I am getting the error
ERROR: Cannot replace a normal view with a late binding view
for the SQL below. Is there any way I could change it so that it doesn't bind with schema views?
CREATE OR REPLACE
VIEW "dev"."XXBRK_DAILY_FX_RATES" ("F_C", "CURRENCY", "C_D", "C_R") AS
SELECT DISTINCT GL.GL_R.F_C, GL.GL_R.CURRENCY,
GL.GL_R.DATE, GL.GL_R.C_R
FROM GL.GL_R
with no schema binding;
WHERE GL.GL_R.C_T='Corporate'
UNION ALL
SELECT DISTINCT GL.GL_R.F_C, GL.GL_R.F_C CURRENCY, GL.GL_R.DATE, 1
FROM GL.GL_R;
So you seem to have a statement issue. The last 4 lines come after the ';' and are not part of the statement being run. I'm guessing that these are extraneous and were posted by mistake.
Views in Redshift come in several types; normal and late binding are two of them. The view "dev"."XXBRK_DAILY_FX_RATES" seems to already exist in your cluster, so your command is trying to replace it, not create it. The error message is correct: you cannot replace a view with a view of a different type. You need to drop the view, then recreate it as late binding.
Now, be careful: other objects that depend on this view will be impacted when you drop it (especially if you CASCADE the drop). Dropping and recreating the view creates a new object in the database, whereas replacing a view just makes a new definition for the same object. Understand the impact of the drop on your database before you execute it.
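For illustration, a sketch using the statement from the question; Redshift expects the WITH NO SCHEMA BINDING clause at the very end of the statement:
DROP VIEW "dev"."XXBRK_DAILY_FX_RATES";
CREATE VIEW "dev"."XXBRK_DAILY_FX_RATES" ("F_C", "CURRENCY", "C_D", "C_R") AS
SELECT DISTINCT GL.GL_R.F_C, GL.GL_R.CURRENCY, GL.GL_R.DATE, GL.GL_R.C_R
FROM GL.GL_R
WHERE GL.GL_R.C_T = 'Corporate'
UNION ALL
SELECT DISTINCT GL.GL_R.F_C, GL.GL_R.F_C CURRENCY, GL.GL_R.DATE, 1
FROM GL.GL_R
WITH NO SCHEMA BINDING;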

Get all constraint errors when inserting data from another table

I have a staging table without any constraints in my Azure SQL database (Azure SQL database 12.0.2000.8). I want to insert the data from the Staging table into the "real" table on which multiple constraints are set. When inserting the data, I use a statement of the kind
INSERT INTO <someTable> SELECT <columns> FROM StagingTable;
Currently I only get the first error when constraints are violated. However, for my use case it is important to get all violations, so they can be resolved altogether.
I have tried using TRY...CATCH mechanisms; however, this throws an error on the first violation and runs the CATCH clause, but does not continue with the remaining data. Note that the correct data without violations should not be inserted either, so the whole insert statement can be rolled back on one error; however, I want to see all violations so that I can correct them all without having to run the insert statement multiple times.
EDIT:
The types of constraints that need to be checked are foreign key constraints, NOT NULL constraints, duplicate keys. No casting is done, so no need to check for conversions.
There are a couple of options:
If you want to catch row-level information, you have to go for a cursor or a WHILE loop, try to insert each row in a TRY...CATCH block, see whether you get an error, and log it.
Alternatively, create another table similar to the main table (say, MainCheckTable) with all constraints, disable all the constraints, and load the data.
Now you can leverage DBCC CHECKCONSTRAINTS to see all the constraint violations (a sketch of the full flow follows the example below). Read more on this.
USE DBName;
DBCC CHECKCONSTRAINTS(MainCheckTable) WITH ALL_CONSTRAINTS;
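For illustration, the full flow might look like this (table names are the hypothetical ones from above; note that DBCC CHECKCONSTRAINTS reports FOREIGN KEY and CHECK violations, so NOT NULL and duplicate-key checks still need separate handling):
-- 1. Disable FOREIGN KEY and CHECK constraints on the copy of the table
ALTER TABLE MainCheckTable NOCHECK CONSTRAINT ALL;
-- 2. Load the staging data without those constraints being enforced
INSERT INTO MainCheckTable SELECT * FROM StagingTable;
-- 3. Report every row that violates a disabled constraint
DBCC CHECKCONSTRAINTS ('MainCheckTable') WITH ALL_CONSTRAINTS;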
First, don't look at your primary table(s); look at the related tables, e.g. lookups. Populate these first. Once you have populated the related tables (i.e. satisfied all related constraints), then add the data.
You need to work backwards from the least constrained tables to the most constrained, if that makes sense.
You should check that your related tables contain the reference values/fields that you intend to insert. This is easy to do, since you already have a staging table.

postgresql: \copy method enter valid entries and discard exceptions

When entering the following command:
\copy mmcompany from '<path>/mmcompany.txt' delimiter ',' csv;
I get the following error:
ERROR: duplicate key value violates unique constraint "mmcompany_phonenumber_key"
I understand why it's happening, but how do I execute the command in a way that valid entries will be inserted and ones that create an error will be discarded?
The reason PostgreSQL doesn't do this is related to how it implements constraints and validation. When a constraint fails it causes a transaction abort. The transaction is in an unclean state and cannot be resumed.
It is possible to create a new subtransaction for each row, but this is very slow and defeats the purpose of using COPY in the first place, so PostgreSQL's COPY does not support it at this time. You can do it yourself in PL/pgSQL with a BEGIN ... EXCEPTION block inside a LOOP over a SELECT from the data copied into a temporary table. This works fairly well but can be slow.
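For example, a minimal PL/pgSQL sketch of that approach, assuming the file has already been copied into a staging table named stagingtable with the same columns as mmcompany:
-- Insert rows one at a time; each failed insert aborts only its own
-- subtransaction, so the loop carries on with the next row.
DO $$
DECLARE
    rec stagingtable%ROWTYPE;
BEGIN
    FOR rec IN SELECT * FROM stagingtable LOOP
        BEGIN
            INSERT INTO mmcompany VALUES (rec.*);
        EXCEPTION WHEN unique_violation THEN
            RAISE NOTICE 'skipping duplicate row';
        END;
    END LOOP;
END;
$$;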
It's better, if possible, to use SQL to check the constraints before doing any insert that violates them. That way you can just:
CREATE TEMPORARY TABLE stagingtable(...);
\copy stagingtable FROM 'somefile.csv'
INSERT INTO realtable
SELECT * FROM stagingtable
WHERE check_constraints_here;
Do keep concurrency issues in mind, though. If you're trying to do a merge/upsert via COPY, you must LOCK TABLE realtable; at the start of your transaction or you will still have the potential for errors. It looks like that's what you're trying to do: a copy-if-not-exists. If so, skipping errors is absolutely the wrong approach. See:
How to UPSERT (MERGE, INSERT ... ON DUPLICATE UPDATE) in PostgreSQL?
Insert, on duplicate update in PostgreSQL?
Postgresql - Clean way to insert records if they don't exist, update if they do
Can COPY be used with a function?
Postgresql csv importation that skips rows
... this is a much-discussed issue.
One way to handle the constraint violations is to define triggers on the target table to handle the errors. This is not ideal, as there can still be race conditions (if loading concurrently), and triggers have pretty high overhead.
Another method: COPY into a staging table and load the data into the target table using SQL with some handling to skip existing entries.
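For example (the phonenumber column is inferred from the constraint name in the error, so adjust it to the real unique column):
-- Stage the file, then insert only rows whose phone number is new
CREATE TEMP TABLE mmcompany_stage (LIKE mmcompany);
\copy mmcompany_stage from '<path>/mmcompany.txt' delimiter ',' csv
INSERT INTO mmcompany
SELECT *
FROM mmcompany_stage s
WHERE NOT EXISTS (
    SELECT 1 FROM mmcompany m
    WHERE m.phonenumber = s.phonenumber
);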
Another useful method is to use pgloader.

Prevent update to non-existent rows

At work we have a table to hold settings which essentially contains the following columns:
PARAMNAME
VALUE
Most of the time new settings are added, but on rare occasions settings are removed. Unfortunately, this means that any scripts which previously updated a removed setting will continue to do so, even though the update results in "0 rows updated", leading to unexpected behaviour.
This situation was picked up recently by a regression test failure but only after much investigation into why the data in the system was different.
So my question is: Is there a way to generate an error condition when an update results in zero rows updated?
Here are some options I have thought of, but none of them are really all that desirable:
PL/SQL wrapper which notices the failed update and throws an exception.
Not ideal as it doesn't stop anyone/a script from manually doing an update.
A trigger on the table which throws an exception.
Goes against our current policy of phasing out triggers.
Requires updating the trigger every time a setting is removed, and maintaining a list of obsolete settings (if doing exclusion).
Might have problems with a mutating table (if doing inclusion by querying which settings currently exist).
A PL/SQL wrapper seems like the best option to me. Triggers are a great thing to phase out, with the exception of generating sequences and inserting history records.
If you're concerned about someone manually updating rather than using the PL/SQL wrapper, just restrict the user role so that it does not have UPDATE privileges on the table but has EXECUTE privileges on the procedure.
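For illustration, a minimal sketch of such a wrapper, assuming the table is called SETTINGS with the PARAMNAME and VALUE columns from the question (the procedure name is made up):
-- Raise an error instead of silently updating zero rows
CREATE OR REPLACE PROCEDURE set_param (
    p_name  IN VARCHAR2,
    p_value IN VARCHAR2
) AS
BEGIN
    UPDATE settings
       SET value = p_value
     WHERE paramname = p_name;

    IF SQL%ROWCOUNT = 0 THEN
        RAISE_APPLICATION_ERROR(-20001, 'No such setting: ' || p_name);
    END IF;
END set_param;
/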
Not really a solution but a method to organize things a bit:
Create a separate table with the parameter definitions and link to that table from the parameter value table. Make the reference to the parameter definition required (nulls not allowed).
Definition table PARAMS (ID, NAME)
Actual settings table PARAM_VALUES (PARAM_ID, VALUE)
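A minimal sketch of that layout (the column types are assumptions):
CREATE TABLE params (
    id   NUMBER PRIMARY KEY,
    name VARCHAR2(100) NOT NULL
);
CREATE TABLE param_values (
    param_id NUMBER NOT NULL REFERENCES params (id),
    value    VARCHAR2(4000)
);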
(changing your table structure is also a very effective way to evoke errors in scripts that have not been updated...)
Maybe you can use the MERGE statement; here is a link for it:
http://www.oracle-developer.net/display.php?id=203
The MERGE statement allows you to combine insert and update in the same query, so in case the desired row does not exist you may insert a record in a buffer table to indicate that the row does not exist, or else you can update the required record.
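For illustration, a minimal MERGE sketch against the hypothetical SETTINGS table (the parameter name and value are placeholders; note that the WHEN NOT MATCHED branch inserts into the target table itself, so logging to a separate buffer table would need extra logic):
MERGE INTO settings s
USING (SELECT 'SOME_PARAM' AS paramname, '42' AS value FROM dual) src
ON (s.paramname = src.paramname)
WHEN MATCHED THEN
    UPDATE SET s.value = src.value
WHEN NOT MATCHED THEN
    INSERT (paramname, value)
    VALUES (src.paramname, src.value);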
Hope it helps