My Postgres database has the following schema, where a user can store multiple profile images.
CREATE TABLE users(
id INT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
name VARCHAR(50)
);
CREATE TABLE images(
id INT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
url VARCHAR(50)
);
CREATE TABLE user_images(
user_id INT REFERENCES users(id),
image_id INT REFERENCES images(id)
);
How do I ensure that when I insert a user object, I also insert at least one user image?
You cannot do so very easily, and I wouldn't encourage you to enforce this. Why? It is a "chicken and egg" problem: you cannot insert a row into users because there is no image, and you cannot insert a row into user_images because there is no user_id.
Although you can handle this situation with transactions or deferred constraint checking, that covers only half the issue, because you also have to prevent deletion of the last image.
Here are two alternatives.
First, you can simply add a main_image_id to the users table and insist that it be NOT NULL. Voila! At least one image is required.
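A minimal sketch of this first option, assuming a new column named main_image_id (not part of the original schema):

-- Run this while users is still empty (or backfill first), because adding
-- a NOT NULL column without a default fails on a non-empty table.
ALTER TABLE users
    ADD COLUMN main_image_id INT NOT NULL REFERENCES images(id);

-- The image must now exist before the user row can be inserted:
INSERT INTO images (url) VALUES ('https://example.com/john.png');
INSERT INTO users (name, main_image_id) VALUES ('John', 1);  -- 1 = the id generated above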
Second, you can use a trigger to maintain a count of images in users. Then treat rows with no images as "deleted" so they are never seen.
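A sketch of the second option's bookkeeping, assuming an added image_count column; the trigger on user_images that keeps it current is omitted here:

-- image_count is an assumed column, maintained by a trigger on user_images (not shown).
ALTER TABLE users ADD COLUMN image_count INT NOT NULL DEFAULT 0;

-- Users without images are treated as "deleted" and never seen:
CREATE VIEW visible_users AS
SELECT id, name
FROM users
WHERE image_count > 0;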
When you insert data into a table, the database can return the id of the row that was inserted; if id > 0, the row has been inserted. But first, add an id column (bigserial, auto-increment, unique) to all tables.
INSERT INTO user_images VALUES (...) RETURNING id;
I have two tables that I would like to have share the same sequence to populate the primary key ID column. However, I also don't want the user to specify or change the value of the ID column.
By using the code below, I can let two tables share the same sequence.
CREATE TABLE T1
(
ID INTEGER DEFAULT SEQ_1.nextval NOT NULL
);
This code will use its own sequence and prevent users from specifying or changing the value with INSERT:
CREATE TABLE T1
(
ID INTEGER GENERATED ALWAYS AS IDENTITY NOT NULL
);
Is there a way to get the best of both worlds? Something like this:
CREATE TABLE T1
(
ID INTEGER GENERATED ALWAYS AS ( SEQ_1.nextval ) NOT NULL
);
Regarding the use case, as #Sujitmohanty30 asked, here is the reason I raised this question:
I'm thinking of implementing inheritance in the database. Consider this UML diagram (I can't directly post images due to insufficient reputation, so apologies for the lack of an image).
ANIMAL is abstract and all inheritance is mandatory. This means no instance of ANIMAL should be created. Furthermore, there is a one-to-many relationship between ANIMAL and ZOO_KEEPER.
Therefore, I came up with this idea:
CREATE SEQUENCE ANIMAL_ID_SEQ;
CREATE TABLE HORSE
(
ID INT DEFAULT ANIMAL_ID_SEQ.nextval NOT NULL PRIMARY KEY,
HEIGHT DECIMAL(3, 2) NOT NULL
);
CREATE TABLE DOLPHIN
(
ID INT DEFAULT ANIMAL_ID_SEQ.nextval NOT NULL PRIMARY KEY,
LENGTH DECIMAL(3, 2) NOT NULL
);
CREATE MATERIALIZED VIEW LOG ON HORSE WITH ROWID;
CREATE MATERIALIZED VIEW LOG ON DOLPHIN WITH ROWID;
CREATE MATERIALIZED VIEW ANIMAL
REFRESH FAST ON COMMIT
AS
SELECT 'horse' AS TYPE, ROWID AS RID, ID -- TYPE column is used as a UNION ALL marker
FROM HORSE
UNION ALL
SELECT 'dolphin' AS TYPE, ROWID AS RID, ID
FROM DOLPHIN;
ALTER TABLE ANIMAL
ADD CONSTRAINT ANIMAL_PK PRIMARY KEY (ID);
CREATE TABLE ZOO_KEEPER
(
NAME VARCHAR(50) NOT NULL PRIMARY KEY,
ANIMAL_ID INT NOT NULL REFERENCES ANIMAL (ID)
);
In this case, the use of the shared sequence is to avoid collisions in the ANIMAL mview. It uses DEFAULT to get the next ID of the shared sequence. However, using DEFAULT doesn't prevent users from manually INSERTing a value into the ID column or UPDATEing it.
You could create a master view/table and generate the sequence there, then copy the value into both tables as column values while inserting.
Another option could be inserting into both tables at the same time: use SEQ.NEXTVAL in the insert into the first table to get a new ID, and then SEQ.CURRVAL to copy the same id into the second table.
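A minimal sketch of that approach in Oracle, using the SEQ_1 sequence from the question and two hypothetical tables t1 and t2 that should share the same id:

-- nextval generates a fresh id for the first insert ...
INSERT INTO t1 (id, name) VALUES (SEQ_1.nextval, 'first row');
-- ... and currval returns that same value within this session,
-- so the second insert copies the identical id:
INSERT INTO t2 (id, notes) VALUES (SEQ_1.currval, 'companion row');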
No, you can't have anything like this, because the ID is generated independently for each of the tables; this can only be done with a sequence, when you are inserting the data into both tables at the same time.
You should normalize your data schema: add an animal_type column to the table and create a composite primary key on both columns.
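One way to read this suggestion is a single ANIMAL table instead of one table per subtype; a sketch only, with assumed column names:

CREATE TABLE ANIMAL
(
    ID          INT GENERATED ALWAYS AS IDENTITY,
    ANIMAL_TYPE VARCHAR(20) NOT NULL,  -- 'horse', 'dolphin', ...
    HEIGHT      DECIMAL(3, 2),         -- horses only
    LENGTH      DECIMAL(3, 2),         -- dolphins only
    CONSTRAINT ANIMAL_PK PRIMARY KEY (ID, ANIMAL_TYPE)
);

Note that ZOO_KEEPER would then have to reference both primary key columns, or you would add a separate UNIQUE constraint on ID.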
I have the following two tables in my Postgres database:
CREATE TABLE User (
Id serial UNIQUE NOT NULL,
Login varchar(80) UNIQUE NOT NULL,
PRIMARY KEY (Id,Login)
);
CREATE TABLE UserData (
Id serial PRIMARY KEY REFERENCES Users (Id),
Password varchar(255) NOT NULL
);
Say, I add a new user with INSERT INTO Users(Id, Login) VALUES(DEFAULT, 'John') and also want to add VALUES(id, 'john1980') in UserData where id is John's new id.
How do I get that id? Running a query for something just freshly created seems superfluous. I have multiple such situations across the database. Maybe my design is flawed in general?
(I'm obviously not storing passwords like that.)
1) Fix your design
CREATE TABLE usr (
usr_id serial PRIMARY KEY
,login text UNIQUE NOT NULL
);
CREATE TABLE userdata (
usr_id int PRIMARY KEY REFERENCES usr
,password text NOT NULL
);
Start by reading the manual about identifiers and key words.
user is a reserved word. Never use it as an identifier.
Use descriptive identifiers. id is useless.
Avoid mixed case identifiers.
serial is meant for a unique column that can be pk on its own. No need for a multicolumn pk.
The referencing column userdata.usr_id cannot be a serial, too. Use a plain integer.
I am just using text instead of varchar(n); that's optional.
You might consider merging the two tables into one ...
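A sketch of what that merged table could look like (column choices are just illustrative):

CREATE TABLE usr (
   usr_id   serial PRIMARY KEY
 , login    text UNIQUE NOT NULL
 , password text NOT NULL
);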
2) Query to INSERT in both
The key is the RETURNING clause, available for INSERT, UPDATE and DELETE, which returns values from the current row immediately.
It is best used in a data-modifying CTE:
WITH ins1 AS (
INSERT INTO usr(login)
VALUES ('John') -- just omit default columns
RETURNING usr_id -- return automatically generated usr_id
)
INSERT INTO userdata (usr_id, password )
SELECT i.usr_id, 'john1980'
FROM ins1 i;
You can consider using a trigger. The Id column of the newly inserted row can be accessed by the name NEW.Id.
References:
CREATE TRIGGER documentation on PostgreSQL Manual
Trigger Procedures
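A rough sketch of that trigger approach for the usr / userdata tables above; the function name and the 'changeme' default password are placeholders for illustration:

CREATE OR REPLACE FUNCTION ins_userdata()
RETURNS trigger
LANGUAGE plpgsql AS
$$
BEGIN
   -- copy the automatically generated key into the child table
   INSERT INTO userdata (usr_id, password)
   VALUES (NEW.usr_id, 'changeme');  -- placeholder password
   RETURN NEW;
END
$$;

CREATE TRIGGER usr_ins_userdata
AFTER INSERT ON usr
FOR EACH ROW EXECUTE PROCEDURE ins_userdata();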
I have two tables which are connected by an m2m relationship.
CREATE TABLE words
(
id INT PRIMARY KEY,
word VARCHAR(100) UNIQUE,
counter INT
);
CREATE TABLE urls
(
id INT PRIMARY KEY,
url VARCHAR(100) UNIQUE
);
CREATE TABLE urls_words
(
url_id INT NOT NULL REFERENCES urls(id),
word_id INT NOT NULL REFERENCES words(id)
);
I have a counter field in the words table. How can I automate the process of updating the counter field, which should hold how many rows are stored in urls_words with a particular word?
I would investigate why you want to store this value. There may be good reasons, but triggers complicate databases.
If this is a "load-then-query" database, then you can update the count when you load data -- presumably at some frequency such as once a day or once a week. You don't need to worry about triggers.
If this is a transactional database, then triggers would be needed and these add complexity to the processing. They also lock tables when you might not want them locked.
An alternative is to have an index on urls_words(word_id, url_id). This would greatly speed up calculating the count when you need it. It also does not require triggers or locks on multiple tables during an update.
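A minimal sketch of this alternative:

-- the index makes counting for a single word cheap:
CREATE INDEX urls_words_word_url_idx ON urls_words (word_id, url_id);

-- compute the count on demand instead of storing it:
SELECT count(*)
FROM urls_words
WHERE word_id = 1;  -- example word id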
Create a trigger on the urls_words table which updates the counter column in the words table every time a change is made (i.e. insert, update, delete).
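A minimal sketch of such a trigger (PostgreSQL), assuming counter starts at 0 for every word:

CREATE OR REPLACE FUNCTION maintain_word_counter()
RETURNS trigger
LANGUAGE plpgsql AS
$$
BEGIN
   IF TG_OP = 'INSERT' THEN
      UPDATE words SET counter = counter + 1 WHERE id = NEW.word_id;
   ELSIF TG_OP = 'DELETE' THEN
      UPDATE words SET counter = counter - 1 WHERE id = OLD.word_id;
   ELSIF TG_OP = 'UPDATE' AND NEW.word_id <> OLD.word_id THEN
      UPDATE words SET counter = counter - 1 WHERE id = OLD.word_id;
      UPDATE words SET counter = counter + 1 WHERE id = NEW.word_id;
   END IF;
   RETURN NULL;  -- return value is ignored for AFTER triggers
END
$$;

CREATE TRIGGER urls_words_counter
AFTER INSERT OR UPDATE OR DELETE ON urls_words
FOR EACH ROW EXECUTE PROCEDURE maintain_word_counter();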
I have two tables which I insert into using JDBC, for example parcelsTable and filesTable, and I have some cases:
1. INSERT new row in both tables.
2. INSERT new row only in parcelsTable.
TABLES:
DROP TABLE parcelsTable;
CREATE TABLE parcelsTable (
num serial PRIMARY KEY,
parcel_name text,
filestock_id integer
);
DROP TABLE filesTable;
CREATE TABLE filesTable (
num serial PRIMARY KEY,
file_name text,
files bytea
);
I want to set parcelsTable.filestock_id = filesTable.num when I insert into both tables, using a TRIGGER.
Is that possible? How can the trigger know that I am inserting into both tables?
You don't need to use a trigger to get the foreign key value in this case. Since you have it set as serial, you can access the latest value using currval. Run something like this from your app:
insert into filesTable (file_name, files) select 'f1', 'asdf';
insert into parcelsTable (parcel_name, filestock_id) select 'p1', currval('filesTable_num_seq');
Note that this should only be used when inserting one record at a time, to grab individual key values from currval. I'm using the default sequence name, table_column_seq, which you should be able to use unless you've explicitly declared something different.
I would also recommend explicitly declaring nullability and the relationship:
CREATE TABLE parcelsTable (
...
filestock_id integer NULL REFERENCES filesTable (num)
);
Here is a working demo at SqlFiddle.
This might not be an answer, but it may be what you need. I am making this an answer instead of a comment because I need the space.
I don't know if you can have a trigger on two tables. Typically this is not needed. As in your case, typically either you are creating a parent record and a child record, or you are just creating a child record of an existing record.
So, typically, if you need a trigger when creating both, it is sufficient to put the trigger on the parent record.
I don't think you can do what you need: populate the foreign key with the parent record's primary key in the same transaction. I think you will have to provide the foreign key in the INSERT for parcelsTable.
You will end up leaving it NULL when you create a record in parcelsTable without creating a record in filesTable, so I think you will want to set the foreign key in the INSERT statement.
The only idea I have for now is that you can create a function that does the inserts into the tables indirectly; then you can have whatever conditions you need, and it works with parallel inserts too. A sketch follows.
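A sketch of such a function (PostgreSQL), based on the parcelsTable / filesTable definitions above; the function name and parameters are made up for illustration:

CREATE OR REPLACE FUNCTION add_parcel(p_parcel_name text,
                                      p_file_name   text  DEFAULT NULL,
                                      p_files       bytea DEFAULT NULL)
RETURNS integer
LANGUAGE plpgsql AS
$$
DECLARE
   v_file_id   integer;
   v_parcel_id integer;
BEGIN
   -- case 1: a file was supplied, so insert into both tables
   IF p_file_name IS NOT NULL THEN
      INSERT INTO filesTable (file_name, files)
      VALUES (p_file_name, p_files)
      RETURNING num INTO v_file_id;
   END IF;

   -- case 2 falls through with v_file_id = NULL (parcel only)
   INSERT INTO parcelsTable (parcel_name, filestock_id)
   VALUES (p_parcel_name, v_file_id)
   RETURNING num INTO v_parcel_id;

   RETURN v_parcel_id;
END
$$;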
Multiple users can call a stored procedure (SP) that will make some changes to mytable in SQL Server. This SP should insert some rows into mytable, which has a reference to itself through the parentid column.
CREATE TABLE mytable(
id int identity(1,1) primary key,
name varchar(20) not null,
parentId int not null foreign key references mytable(id)
)
In order to insert a row into such a table, according to other posts, I have 2 ways:
Allow NULL in the parentid column with ALTER TABLE mytable ALTER COLUMN parentid int null, insert the row, update parentid, and then disallow NULL in parentid again
Allow identity inserts with SET IDENTITY_INSERT mytable ON, insert a dummy row with id=-1 and parentid=-1, insert the correct row with a reference to -1, update its parentid to SCOPE_IDENTITY(), and at the end set IDENTITY_INSERT back to OFF
The case:
Assume I take the 2nd way. The SP managed to SET IDENTITY_INSERT mytable ON but has not yet finished executing the rest of the SP. At this time, there are other INSERT requests (NOT through the SP) to the mytable table, like INSERT INTO mytable(name, parentid) VALUES('theateist', -1). No id is specified because they assume that IDENTITY_INSERT is OFF and therefore the id is auto-generated.
The Question:
Will this cause errors for those inserts, because IDENTITY_INSERT is ON during this period and the id is no longer auto-generated, so an explicit id is required? If yes, would it be better to use the 1st way?
Thank you
identity_insert is a per-connection setting - you won't affect other connections/statements running against this table.
I definitely wouldn't suggest going the first way, if it could be avoided, since it could impact other users of the table - e.g. some other connection could do a broken insert (parentid=null) while the column definition allows it, and then your stored proc will break. Also, setting a column not null forces a full table scan to occur, so this won't work well as the table grows.
If you did stick with method 2, you've still got an issue with what happens if two connections run this stored proc simultaneously - they'll both want to insert the -1 row, at different times, and delete it also. You'll have conflicts.
I'm guessing the problem you're having is inserting the "roots" of the tree(s), since they have no parent, and so you're attempting to have them self referencing. I'd instead probably make the roots have a null parentid permanently. If there's some other key column(s), these could be used in a filtered index or indexed view to ensure that only one root exists for each key.
Imagine that we're building some form of family trees, and ignoring most of the realities of such beasts (such as most families requiring children to have two parents):
CREATE TABLE People (
PersonID int IDENTITY(1,1) not null,
Surname varchar(30) not null,
Forename varchar(30) not null,
ParentID int null,
constraint PK_People PRIMARY KEY (PersonID),
constraint FK_People_Parents FOREIGN KEY (ParentID) references People (PersonID)
)
CREATE UNIQUE INDEX IX_SoleFamilyRoot ON People (Surname) WHERE (ParentID is null)
This ensures that, within each family (as identified by the surname), exactly one person has a null ParentID. Hopefully, you can modify this example to fit your model.
On SQL Server 2005 and earlier, you have to use an indexed view instead.
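For reference, a rough sketch of that pre-2008 workaround with the People table above (required SET options and other details omitted):

CREATE VIEW dbo.FamilyRoots
WITH SCHEMABINDING
AS
SELECT Surname
FROM dbo.People
WHERE ParentID IS NULL;
GO

-- the unique clustered index makes a second root per Surname impossible
CREATE UNIQUE CLUSTERED INDEX UX_FamilyRoots ON dbo.FamilyRoots (Surname);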