I have merged the table foo into the table bar.
foo had the columns:
id
name
and bar has the columns:
id
name
type
Every entry of foo that I inserted into bar received the value 2 in the type column.
Now I want to create an updatable view for foo, which queries bar to return the inserted entries.
If I insert something into the view, the type column of bar should always be 2.
I tried something like:
CREATE OR REPLACE VIEW v_foo AS
SELECT bar.id, bar.name, 2 AS type
FROM bar
WHERE bar.type = 2;
But this still sets type to null on an insert.
Can anyone help with this?
You can specify an ON INSERT rule for the view like this:
CREATE OR REPLACE RULE v_foo_insert_rule AS ON INSERT
TO v_foo
DO INSTEAD INSERT INTO bar(id, name, type) VALUES (NEW.id, NEW.name, 2);
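A quick way to check the rule (a small sketch, assuming bar already exists with the id, name and type columns described above):
INSERT INTO v_foo (id, name) VALUES (1, 'test');
SELECT id, name, type FROM bar WHERE id = 1; -- type should come back as 2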
How can I create a view that is simply this:
name | exist_check
------+------------
foo   | false
The exist_check column depends on whether a SELECT query against another table returns anything or not.
For example:
SELECT *
FROM foo_table
WHERE name = 'foo'
If that returns anything, exist_check in the view should be true. If it returns nothing, exist_check should be false.
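One rough sketch of this (assuming PostgreSQL-style SQL, and assuming the names to check live in a hypothetical table names_to_check) is an EXISTS subquery per row:
CREATE VIEW name_exists AS
SELECT n.name,
       EXISTS (SELECT 1
               FROM foo_table f
               WHERE f.name = n.name) AS exist_check
FROM names_to_check n;
In PostgreSQL the EXISTS expression is already a boolean, so exist_check comes back as true or false; other databases may need a CASE WHEN ... THEN ... ELSE ... END wrapper instead.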
I am trying to do a bulk update to a table that has a UNIQUE constraint on the column I'm updating. Suppose the table is defined by:
CREATE TABLE foo (id INTEGER PRIMARY KEY, bar INTEGER UNIQUE);
Suppose the database contains a series of rows with contiguous integer values in the bar column ranging from 1 to 100, and that they've been inserted sequentially.
Suppose I want to put a five-wide gap in the "bar" sequence after 17, for example with a query such as this:
UPDATE foo SET bar = bar + 5 WHERE bar > 17;
SQLite refuses to execute this update, saying "Error: UNIQUE constraint failed: foo.bar". All right, sure: if the query is executed one row at a time and starts at the first row that meets the WHERE clause, the UNIQUE constraint will indeed be violated, because two rows will briefly have a bar value of 23 (the row where bar was 18, and the original row where bar is 23). But if I could somehow force SQLite to run the update bottom-up (start at the highest value of bar and work backward), the UNIQUE constraint would not be violated.
SQLite has an optional ORDER BY / LIMIT clause for UPDATE, but that doesn't affect the order in which the UPDATEs occur; as stated at the bottom of this page, "the order in which rows are modified is arbitrary."
Is there some simple way to suggest to SQLite to process row updates in a certain order? Or do I have to use a more convoluted route such as a subquery?
UPDATE: This does not work; the same error appears:
UPDATE foo SET bar = bar + 5 WHERE bar IN
(SELECT bar FROM foo WHERE bar > 17 ORDER BY bar DESC);
If moving the unique constraint out of the table definition into its own independent index is feasible, implementing Ben's idea becomes easy:
CREATE TABLE foo(id INTEGER PRIMARY KEY, bar INTEGER);
CREATE UNIQUE INDEX bar_idx ON foo(bar);
-- Work with the table as usual
DROP INDEX bar_idx;                          -- Temporarily drop the unique index
UPDATE foo SET bar = bar + 5 WHERE bar > 17; -- Update the bar values; nothing enforces uniqueness now
CREATE UNIQUE INDEX bar_idx ON foo(bar);     -- Restore the unique index
If not, something like this works:
CREATE TEMP TABLE foo_copy AS SELECT * FROM foo;
UPDATE foo_copy SET bar = bar + 5 WHERE bar > 17; -- The copy has no unique constraint, so this succeeds
DELETE FROM foo;
INSERT INTO foo SELECT * FROM foo_copy;
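This works because foo is emptied before the updated rows are re-inserted, so the unique constraint never sees the old and new values at the same time. You would presumably also drop the copy when you are done (it disappears at the end of the session anyway):
DROP TABLE foo_copy;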
An alternative that doesn't require changing the table is to run an intermediate update that first moves the new values into a range the existing data cannot contain (easy if values can never be negative), and then a second update that sets them to their final values.
e.g. the following demonstrates this using negative intermediate values :-
-- Load the data
DROP TABLE IF EXISTS foo;
CREATE TABLE foo (id INTEGER PRIMARY KEY, bar INTEGER UNIQUE);
WITH RECURSIVE cte1(x) AS (SELECT 1 UNION ALL SELECT x + 1 FROM cte1 LIMIT 100)
INSERT INTO foo (bar) SELECT * FROM cte1;
-- Show the original data
SELECT * FROM foo;
UPDATE foo SET bar = 0 - (bar + 5) WHERE bar > 17; -- Step 1: move the shifted values into the unused negative range
UPDATE foo SET bar = 0 - bar WHERE bar < 0;        -- Step 2: flip them back to positive, now with the gap in place
-- Show the end result
SELECT * FROM foo;
Result 1 shows the original data; Result 2 shows the updated data with the gap in place.
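As a quick sanity check against the 100-row load above, the gap and the shifted upper end can be verified with:
SELECT count(*) FROM foo WHERE bar BETWEEN 18 AND 22; -- expect 0: the five-wide gap
SELECT max(bar) FROM foo;                             -- expect 105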
I created a view myview with two columns, ID and Name, but I want to add an extra column to it.
I am using this query:
ALTER VIEW myview ADD COLUMNS (AGE int);
But I am getting this error:
required (...)+ loop did not match anything at input 'columns' in add
partition statement.
Can anyone help me with this?
You will have to get the new column from the table from which the view was created.
alter view myview as select col_1, col_2, Age from your_table;
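If the underlying table does not have the new column yet, you would have to add it there first, e.g. (assuming your_table is the base table of the view):
ALTER TABLE your_table ADD COLUMNS (Age INT);
and then re-point the view at it with the ALTER VIEW ... AS SELECT above.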
I have a view like this one:
SELECT NVL(foo, 0) foo FROM bar
Unfortunately, this view loses the fact that bar.foo is NUMBER(1) and instead just assigns it as NUMBER. I want to keep the type information, so I did this:
SELECT CAST(NVL(foo, 0) AS NUMBER(1)) foo FROM bar
This works, but if I have a lot of columns like foo I need to duplicate the type information. If I changed the precision of one of them, I would have to change it in the view as well or risk losing precision. So ideally I would want to do something like this:
SELECT CAST(NVL(foo, 0) AS TYPE(foo)) foo FROM bar
Is that possible, and if so, how?
I don't think this is possible, as you would implicitly change the view definition when changing the base table.
You might want to avoid inserting NULL data at all: if your business logic allows restricting your columns to NOT NULL, you can either implement the NVL(foo, 0) logic on the client side or use a BEFORE INSERT trigger to convert all NULL inserts to 0:
CREATE TABLE T1(
foo NUMBER(1) NOT NULL
);
CREATE TRIGGER TRG1 BEFORE INSERT ON T1
FOR EACH ROW
BEGIN
:new.foo := NVL(:new.foo, 0); -- use :new, not :old; on an INSERT the :old values are always NULL
END;
/
Then you can
INSERT INTO T1 VALUES (NULL);
SELECT * FROM T1;
which will give you 0 instead of NULL.
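A non-NULL insert, on the other hand, passes through the trigger unchanged; a small check against the same table:
INSERT INTO T1 VALUES (5);
SELECT * FROM T1; -- returns 0 and 5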
See Oracle Documentation
I have a table 'foo' that looks like this:
ID | NAME
------+----------------------------
123 | PiratesAreCool
254 | NinjasAreCoolerThanPirates
and a second table 'bar':
SID | ID | created | dropped
------+------+------------+-----------
9871 | 123 | 03.24.2009 | 03.26.2009
9872 | 123 | 04.02.2009 |
bar.ID is a reference (foreign key) to foo.ID.
Now I want to prevent inserting a new record into 'bar' when there is already a record with the same ID whose dropped column is NULL.
So, when 'bar' looks like the above,
INSERT INTO BAR VALUES ('9873','123','07.24.2009',NULL);
should be forbidden, but
INSERT INTO BAR VALUES ('9873','254','07.24.2009',NULL);
should be allowed (because there is no 'open' bar-record for 'NinjasAreCoolerThanPirates').
How do I do that?
I hope my problem is clear and somebody can help me.
Hmm, it should be enough to just create a unique index:
create unique index ix_open_bar on bar (id, dropped);
Of course, that would also have the effect that you cannot drop a bar twice per day (unless dropped is a timestamp, which would minimize the risk).
Actually, I noticed that Postgres has support for partial indexes:
create unique index ix_open_bar on bar (id) where dropped is null;
Update:
After some tests: the two-column unique index is not enforced when dropped is NULL (NULLs are treated as distinct), but the partial index will still work.
And if you don't want to use the partial indexes, this might work as well:
create unique index ix_open_bar on bar(id, coalesce(dropped, 'NULL'));
However, when using coalesce, both arguments need to have the same datatype (so if dropped is a timestamp, you need to replace 'NULL' with a timestamp value instead).
This will only insert a record if there isn't an 'open' record in bar for your id:
INSERT INTO bar
SELECT '9873','254','07.24.2009',NULL
WHERE NOT EXISTS(SELECT 1 FROM bar WHERE ID='254' AND dropped IS NULL)
Set up a trigger on the table bar that fires on insert, checks whether the incoming row's ID is already present with dropped still NULL, and rejects the insert if so.
I don't know the specific Postgres syntax, but it should work something like this:
CREATE TRIGGER trigger_name BEFORE INSERT ON bar
IF EXISTS (
SELECT 1
FROM bar
WHERE bar.ID = inserted.ID
AND bar.dropped IS NULL
)
BEGIN
-- raise an error or reject or whatever Postgres calls it.
END
And then whenever you try to insert into bar, this trigger will check whether an open record already exists and reject the insert if so. If the existing records' dropped values aren't NULL, it'll allow the insert just fine.
If someone knows the right syntax for this, please feel free to edit my answer.
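For reference, a rough PL/pgSQL version of that idea could look like the following (the function and trigger names are just placeholders):
CREATE FUNCTION reject_open_bar() RETURNS trigger AS $$
BEGIN
    IF EXISTS (SELECT 1
               FROM bar
               WHERE bar.id = NEW.id
                 AND bar.dropped IS NULL) THEN
        RAISE EXCEPTION 'there is still an open bar record for id %', NEW.id;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER reject_open_bar_trigger
    BEFORE INSERT ON bar
    FOR EACH ROW EXECUTE PROCEDURE reject_open_bar();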
You can create a partial index with a WHERE clause. For your purposes this might do:
CREATE UNIQUE INDEX my_check on bar(id) where dropped is null;
Assuming id 124 does NOT exist in the table, this will be allowed, but only ONE record can have dropped = NULL for a given ID:
INSERT INTO BAR VALUES ('9873','124','07.24.2009',NULL);
And this will be allowed whether or not 124 already exists:
INSERT INTO BAR VALUES ('9873','124','07.24.2009','07.24.2009');
If an open record (dropped IS NULL) for 125 already exists, this will not be allowed:
INSERT INTO BAR VALUES ('9873','125','07.24.2009',NULL);
But this will be allowed:
INSERT INTO BAR VALUES ('9873','125','07.24.2009','07.24.2009');