I have one table that has a Code and a Type property. I have another table which has a foo_id property. A foo_id is a Code whose Type == "Foo", so when creating a constraint between these two tables I need to match Code to foo_id and Type to the constant "Foo".
Is there a way to do this? I don't want to add a Type column to my second table that is going to have the same value for every single row, because that seems like a waste.
Table 1                    Table 2
Code  <------------------  Foo_id     (Foo_id maps to Code)
Type  <--- "Foo"                      (Table 2 doesn't have a property
                                       that maps to Type, but it should
                                       be the constant "Foo")
I'm creating an association between Table 1 and Table 2 in my .edmx file. When I click on that association and then click on the Referential Constraint, I can see Code and Type for the principal key, but foo_id is the only Dependent Property available. So I want to specify that the constraint for Type should be a constant.
There are, of course, other values for Type other than Foo, but Table 2 in particular is only concerned with Foo types.
I can work around it by just doing something like:
var x = from i in Table2
        select new {
            someT2Prop = i.Table2Prop,
            someT1Prop = (from r in Table1
                          where r.Code == i.Foo_id && r.Type == "Foo"
                          select r.Table1Prop).FirstOrDefault()
        };
But that's kind of messy. I'd like to just have a navigation property from Table 2 to Table 1, so I could do something like this:
var x = from i in Table2
        select new {
            someT2Prop = i.Table2Prop,
            someT1Prop = i.Table1.Table1Prop
        };
If I understand your question correctly, then:
Create a view:
SELECT Code
FROM [Table 1]
WHERE Type = 'Foo'
Then constrain on Code values from this view.
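For example, a sketch in T-SQL (the view name FooCodes is illustrative):

CREATE VIEW FooCodes AS
SELECT Code
FROM [Table 1]
WHERE Type = 'Foo';

The idea is that Table 2's Foo_id can then be constrained against FooCodes.Code alone, with no constant Type value needed in the referential constraint.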
Related
I have a polymorphic table, like so:
| id (Uuid, not null) | name (Text, nullable) | other_property (Bool, not null) |
and a check constraint like this:
other_property OR name IS NOT NULL
The idea behind this is that there are two "types" of objects in this table, one with a name and one without. The reason I'm doing this is that I don't want to duplicate the relationship tables (i.e. "one_to_many_table_with_name", "one_to_many_table_without_name").
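For reference, a minimal sketch of that schema (the table name my_table is taken from the view below; the constraint name is illustrative):

CREATE TABLE my_table (
    id UUID NOT NULL,
    name TEXT,
    other_property BOOL NOT NULL,
    CONSTRAINT name_or_flag CHECK (other_property OR name IS NOT NULL)
);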
When creating a view like this:
CREATE VIEW with_name AS
SELECT id, name FROM my_table WHERE other_property = FALSE;
the view's name column is also marked nullable (because the root table's name column is nullable), even though I know it can't be NULL due to the check constraint.
Is there a way to force Postgres to mark the view's column as NOT NULL?
I have merged two tables: foo was merged into bar.
foo had the columns:
id
name
and bar has:
id
name
type
Every entry of foo I inserted into bar received the value 2 in the type column.
Now I want to create an updatable view for foo, which queries bar to return the inserted entries.
If I insert something into the view, the type column of bar should always be set to 2.
I tried something like
CREATE OR REPLACE VIEW v_foo AS
SELECT bar.id, bar.name, 2 AS type
FROM bar
WHERE bar.type = 2;
But this still sets type to null on an insert.
Can anyone help with this?
You can specify an ON INSERT rule for the view like this:
CREATE OR REPLACE RULE v_foo_insert_rule AS ON INSERT
TO v_foo
DO INSTEAD INSERT INTO bar(id, name, type) VALUES (NEW.id, NEW.name, 2);
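A quick sanity check (values illustrative):

INSERT INTO v_foo (id, name) VALUES (1, 'test');
SELECT id, name, type FROM bar WHERE id = 1;  -- type comes back as 2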
I am trying to do a bulk update to a table that has a UNIQUE constraint on the column I'm updating. Suppose the table is defined by:
CREATE TABLE foo (id INTEGER PRIMARY KEY, bar INTEGER UNIQUE);
Suppose the database contains a series of rows with contiguous integer values in the bar column ranging from 1 to 100, and that they've been inserted sequentially.
Suppose I want to put a five-wide gap in the "bar" sequence after 17, for example with a query such as this:
UPDATE foo SET bar = bar + 5 WHERE bar > 17;
SQLite refuses to execute this update, saying "Error: UNIQUE constraint failed: foo.bar". All right, sure: if the query is executed one row at a time, starting at the first row that meets the WHERE clause, the UNIQUE constraint will indeed be violated; two rows would have a bar value of 23 (the row where bar was 18, and the original row where bar is 23). But if I could somehow force SQLite to run the update bottom-up (start at the highest bar value and work backward), the UNIQUE constraint would not be violated.
SQLite has an optional ORDER BY / LIMIT clause for UPDATE, but that doesn't affect the order in which the UPDATEs occur; as stated at the bottom of this page, "the order in which rows are modified is arbitrary."
Is there some simple way to suggest to SQLite to process row updates in a certain order? Or do I have to use a more convoluted route such as a subquery?
UPDATE: This does not work; the same error appears (the ORDER BY inside the subquery has no effect on the order in which the outer UPDATE modifies rows):
UPDATE foo SET bar = bar + 5 WHERE bar IN
(SELECT bar FROM foo WHERE bar > 17 ORDER BY bar DESC);
If moving the unique constraint out of the table definition into its own independent index is feasible, implementing Ben's idea becomes easy:
CREATE TABLE foo(id INTEGER PRIMARY KEY, bar INTEGER);
CREATE UNIQUE INDEX bar_idx ON foo(bar);
-- Do stuff
DROP INDEX bar_idx;
-- Update bar values
CREATE UNIQUE INDEX bar_idx ON foo(bar); -- Restore the unique index
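Filled in with the question's actual statement, the sequence would be:

DROP INDEX bar_idx;
UPDATE foo SET bar = bar + 5 WHERE bar > 17;
CREATE UNIQUE INDEX bar_idx ON foo(bar); -- fails if the update left duplicates behind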
If not, something like
CREATE TEMP TABLE foo_copy AS SELECT * FROM foo;
-- Update foo_copy's bar values
DELETE FROM foo;
INSERT INTO foo SELECT * FROM foo_copy;
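Concretely, for the gap example (the explicit DROP is optional; a temp table vanishes on its own when the connection closes):

CREATE TEMP TABLE foo_copy AS SELECT * FROM foo;  -- the copy has no unique constraint
UPDATE foo_copy SET bar = bar + 5 WHERE bar > 17;
DELETE FROM foo;
INSERT INTO foo SELECT * FROM foo_copy;
DROP TABLE foo_copy;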
An alternative that doesn't require changing the table definition is an intermediate UPDATE that moves the new values into a range that cannot collide with any existing values (easy if no values can be negative), followed by a second UPDATE that sets them to what they should be. Since neither UPDATE produces a duplicate on its own, the UNIQUE constraint is never violated.
E.g. the following demonstrates this using negative intermediate values:
-- Load the data
DROP TABLE IF EXISTS foo;
CREATE TABLE foo (id INTEGER PRIMARY KEY, bar INTEGER UNIQUE);
WITH RECURSIVE cte1(x) AS (SELECT 1 UNION ALL SELECT x + 1 FROM cte1 LIMIT 100)
INSERT INTO foo (bar) SELECT * FROM cte1;
-- Show the original data
SELECT * FROM foo;
-- Step 1: move the affected rows to unused negative values (no collisions)
UPDATE foo SET bar = 0 - (bar + 5) WHERE bar > 17;
-- Step 2: flip them back to positive, now shifted up by 5
UPDATE foo SET bar = 0 - bar WHERE bar < 0;
-- Show the end result
SELECT * FROM foo;
Result 1 (the original data) shows bar running from 1 to 100; Result 2 (the updated data) shows bar running 1..17 and then 23..105, with the five-wide gap in place.
Sometimes, one might want to move some data from one column to another. By moving (in contrast to copying), I mean that the new column was originally NULL before the operation, and the old column should be set to NULL after the operation.
I have a table defined as such:
CREATE TABLE photos(id BIGSERIAL PRIMARY KEY, photo1 BYTEA, photo2 BYTEA);
Suppose there is an entry in the table where photo1 contains some data and photo2 is NULL. I would like to make an UPDATE query such that photo1 becomes NULL and photo2 contains the data that was originally in photo1.
I issue the following SQL command (WHERE clause left out for brevity):
UPDATE photos SET photo2 = photo1, photo1 = NULL;
It seems to work.
I also tried it this way:
UPDATE photos SET photo1 = NULL, photo2 = photo1;
It also seems to work.
But is it guaranteed to work? Specifically, could photo1 be set to NULL before photo2 is set to photo1, thereby causing me to end up with NULL in both columns?
As an aside, this standard UPDATE syntax seems inefficient when my BYTEAs are large, as photo2 has to be copied byte-by-byte from photo1, when a simple swapping of pointers might have sufficed. Maybe there is a more efficient way that I don't know about?
This is definitely safe.
Column-references in the UPDATE refer to the old columns, not the new values. There is in fact no way to reference a computed new value from another column.
See, e.g.
CREATE TABLE x (a integer, b integer);
INSERT INTO x (a,b) VALUES (1,1), (2,2);
UPDATE x SET a = a + 1, b = a + b;
results in
test=> SELECT * FROM x;
a | b
---+---
2 | 2
3 | 4
... and the ordering of assignments is not significant. If you try to assign to the same column more than once, you'll get
test=> UPDATE x SET a = a + 1, a = a + 1;
ERROR: multiple assignments to same column "a"
because it makes no sense to assign to the same column multiple times, given that both expressions reference the old tuple values, and order is not significant.
However, to avoid a full table rewrite in this case, I would just use ALTER TABLE ... RENAME COLUMN ... TO ..., then create the new column with the old name.
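A sketch of that approach for the photos table, assuming photo2 is NULL in every row (the rename affects the whole table, not just selected rows):

BEGIN;
ALTER TABLE photos DROP COLUMN photo2;              -- held no data
ALTER TABLE photos RENAME COLUMN photo1 TO photo2;  -- the data "moves" without a rewrite
ALTER TABLE photos ADD COLUMN photo1 BYTEA;         -- recreate the now-empty column
COMMIT;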
I have a view like this one:
SELECT NVL(foo, 0) foo FROM bar
Unfortunately, this view loses the fact that bar.foo is NUMBER(1) and instead just types it as NUMBER. I want to keep the type information, so I did this:
SELECT CAST(NVL(foo, 0) AS NUMBER(1)) foo FROM bar
This works, but if I have a lot of columns like foo I need to duplicate the type information. If I were to change the precision of one of them, I would have to change it in the view as well, or risk losing precision. So ideally I would want to do something like this:
SELECT CAST(NVL(foo, 0) AS TYPE(foo)) foo FROM bar
Is that possible, and if so, how?
I don't think this is possible, as you would implicitly change the view definition when changing the base table.
You might want to avoid inserting NULL data at all: if your business logic allows restricting your columns to NOT NULL, you can either implement the NVL(foo, 0) logic on the client side or use a BEFORE INSERT trigger to convert all NULL inserts to 0:
CREATE TABLE T1(
foo NUMBER(1) NOT NULL
);
CREATE TRIGGER TRG1 BEFORE INSERT ON T1
FOR EACH ROW
BEGIN
  -- :old is always NULL in an INSERT trigger; default the incoming :new value instead
  :new.foo := NVL(:new.foo, 0);
END;
/
Then you can
INSERT INTO T1 VALUES (NULL);
SELECT * FROM T1;
which will give you a row with foo = 0.
See Oracle Documentation