I have two tables which I insert into using JDBC, for example parcelsTable and filesTable, and I have some cases:
1. INSERT new row in both tables.
2. INSERT new row only in parcelsTable.
TABLES:
DROP TABLE parcelsTable;
CREATE TABLE parcelsTable (
num serial PRIMARY KEY,
parcel_name text,
filestock_id integer
);
DROP TABLE filesTable;
CREATE TABLE filesTable (
num serial PRIMARY KEY,
file_name text,
files bytea
);
I want to set parcelsTable.filestock_id = filesTable.num when I have an INSERT in both tables, using a TRIGGER.
Is it possible? How can I know that I inserted into both tables?
You don't need to use a trigger to get the foreign key value in this case. Since you have it set as serial, you can access the latest value using currval. Run something like this from your app:
insert into filesTable (file_name, files) select 'f1', 'asdf';
insert into parcelsTable (parcel_name, filestock_id) select 'p1', currval('filesTable_num_seq');
Note that this should only be used when inserting one record at a time, grabbing individual key values from currval. I'm using the default sequence name of table_column_seq, which you should be able to rely on unless you've explicitly declared something different.
I would also recommend explicitly declaring nullability and the relationship:
CREATE TABLE parcelsTable (
...
filestock_id integer NULL REFERENCES filesTable (num)
);
Here is a working demo at SqlFiddle.
This might not be an answer, but it may be what you need. I am making this an answer instead of a comment because I need the space.
I don't know if you can have a trigger on two tables; typically this is not needed. As in your case, typically either you are creating a parent record and a child record together, or you are just creating a child record of an existing parent record.
So, typically, if you need a trigger when creating both, it is sufficient to put the trigger on the parent record.
I don't think you can do what you need. What you are trying to do is populate the foreign key with the parent record's primary key in the same transaction. I don't think that is possible. I think you will have to provide the foreign key in the INSERT for parcelsTable.
You will end up leaving it NULL at times when you are creating a record in parcelsTable without creating a record in filesTable. So I think you will want to set the foreign key in the INSERT statement.
The only idea I have at the moment is that you can create a function that does the inserts into the tables indirectly. Then you can have whatever conditions you need, and it works with parallel inserts too.
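For example, a minimal PL/pgSQL sketch; the function name and parameters are mine, assuming the tables from the question:

CREATE OR REPLACE FUNCTION insert_parcel(p_parcel_name text, p_file_name text, p_files bytea)
RETURNS void AS $$
DECLARE
    v_file_id integer;  -- stays NULL when no file is inserted (case 2)
BEGIN
    IF p_file_name IS NOT NULL THEN
        INSERT INTO filesTable (file_name, files)
        VALUES (p_file_name, p_files)
        RETURNING num INTO v_file_id;
    END IF;
    INSERT INTO parcelsTable (parcel_name, filestock_id)
    VALUES (p_parcel_name, v_file_id);
END;
$$ LANGUAGE plpgsql;

-- case 1: SELECT insert_parcel('p1', 'f1', 'asdf'::bytea);
-- case 2: SELECT insert_parcel('p2', NULL, NULL);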
My Postgres database has the following schema, where the user can store multiple profile images.
CREATE TABLE users(
id INT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
name VARCHAR(50)
);
CREATE TABLE images(
id INT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
url VARCHAR(50)
);
CREATE TABLE user_images(
user_id INT REFERENCES users(id),
image_id INT REFERENCES images(id)
);
How do I ensure that when I insert a user object, I also insert at least one user image?
You cannot do so very easily... and I wouldn't encourage you to enforce this. Why? The problem is a "chicken and egg" problem. You cannot insert a row into users because there is no image. You cannot insert a row into user_images because there is no user_id.
Although you can handle this situation with transactions or delayed constraint checking, that covers only half the issue -- because you have to prevent deletion of the last image.
Here are two alternatives.
First, you can simply add a main_image_id to the users table and insist that it be NOT NULL. Voila! At least one image is required.
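A minimal sketch of the first alternative, assuming the schema above (images now has to be created before users, and the image inserted before the user):

CREATE TABLE users(
    id INT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    name VARCHAR(50),
    main_image_id INT NOT NULL REFERENCES images(id)  -- forces at least one image per user
);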
Second, you can use a trigger to maintain a count of images in users. Then treat rows with no images as "deleted" so they are never seen.
When you insert data into a table, the database can return the id of the row which was inserted. So, if id > 0, the row has been inserted. But first, add an id column (bigserial, auto increment, unique) to all tables.
INSERT INTO user_images VALUES (...) RETURNING id;
I have a table with a column position, which has UNIQUE and NOT NULL constraints.
I have a requirement to move the selected table item up/down;
for that I am taking the selected indexes and swapping them,
and saving those two items in the DB.
Whenever I try to insert the first item, it gives a UNIQUE constraint violation,
because the item's index is already there in the DB.
One possibility is to take a temporary index, swap, and save; I think that works.
But is there any other way to achieve this requirement?
If you do the update in one UPDATE statement, it'll work fine.
create table t (id number primary key);
insert into t values (1);
insert into t values (2);
commit;
update t set id = case when id = 1 then 2 else 1 end
where id in (1,2);
The easiest way would be to use a temporary value like you say because the constraint will not let you have two rows with the same value at any time.
You can probably derive a temporary value that is in itself unique by basing it on the original value and looking at what kind of data you cannot normally have. For example, negative numbers might work.
Other than that, you could declare the constraint as deferrable. Then it won't be enforced until the end of your transaction. But that is probably a bit too much effort/impact.
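For example, a PostgreSQL-flavored sketch (table and constraint names assumed; you would replace the existing unique constraint with a deferrable one):

ALTER TABLE items ADD CONSTRAINT items_position_uq
    UNIQUE (position) DEFERRABLE INITIALLY DEFERRED;

BEGIN;
UPDATE items SET position = 2 WHERE id = 1;
UPDATE items SET position = 1 WHERE id = 2;
COMMIT;  -- the unique constraint is only checked here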
If the field in question is really only used for sorting (and not for object identity), you could consider dropping the uniqueness altogether. You can use a unique primary key as a tie-breaker if necessary.
In Oracle SQL, what is the best way to create primary key values for an entity? I have been starting each different entity 100 apart and incrementing new rows by 1, but I can see how this is not good, because if I have over 100 inserts into a table I would reuse a primary key number already given to another entity. I have many tables with primary keys. How do I make sure all of the values are unique and there is no chance of them overlapping with other primary key values?
An example of what I have been doing is as follows:
create table example (
foo_id number(5),
Constraint example_foo_id_pk Primary key (foo_id)
);
Insert Into example
Values(2000);
Insert Into example
Values(2010);
create table example2 (
foobar_id number(5),
Constraint example2_foobar_id_pk Primary key (foobar_id)
);
Insert Into example2
Values (2100);
Insert Into example2
Values (2110);
In Oracle people commonly use sequences to generate numbers. In an insert trigger, the next value of the sequence is queried and put in the primary key field. So you normally don't pass a value for that field yourself.
Something like this:
CREATE SEQUENCE seq_example;
CREATE OR REPLACE TRIGGER tib_example
BEFORE INSERT ON example
FOR EACH ROW
BEGIN
SELECT seq_example.NEXTVAL
INTO :new.foo_id
FROM dual;
END;
/
Then you can just insert a record without passing any value for the id, only for the other fields.
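For example (since the example table here has only the id column, pass NULL and let the trigger overwrite it):

INSERT INTO example (foo_id) VALUES (NULL);  -- the trigger fills in seq_example.NEXTVAL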
If you want the keys to be unique over multiple tables, you can use the same sequence for each of them, but usually this is not necessary at all. A foo and a bar can have the same numeric id if they are different entities.
If you want every entity to have a unique ID throughout your database, you might consider using GUIDs.
Try using a sequence:
CREATE SEQUENCE Seq_Foo
MINVALUE 1
MAXVALUE 99999999
START WITH 1
INCREMENT BY 1;
To use the sequence in an insert, use Seq_Foo.NextVal.
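For example (table and column names assumed):

INSERT INTO foo (foo_id, name)
VALUES (Seq_Foo.NEXTVAL, 'some name');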
Starting with Oracle database 12C, you can use identity columns. Use something like
foobar_id number(5) GENERATED BY DEFAULT ON NULL AS IDENTITY
For older versions, sequences are the recommended way, although some ORM tools offer using a table which stores the counter. Inserting via a sequence can be done either with triggers or by directly inserting sequence.nextval into your table. The latter may be useful if you need the generated ID for other purposes (like inserting into child tables), as sketched below.
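A PL/SQL sketch of the latter, reusing the example table and sequence from the answer above:

DECLARE
    new_id example.foo_id%TYPE;
BEGIN
    INSERT INTO example (foo_id)
    VALUES (seq_example.NEXTVAL)
    RETURNING foo_id INTO new_id;
    -- new_id is now available for inserts into child tables
END;
/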
I would like to add a constraint which prevents adding a value to a column if the value exists in the primary key column of another table. Is this possible?
EDIT:
Table: MasterParts
MasterPartNumber (Primary Key)
Description
....
Table: AlternateParts
MasterPartNumber (Composite Primary Key, Foreign Key to MasterParts.MasterPartNumber)
AlternatePartNumber (Composite Primary Key)
Problem - Alternate part numbers for each master part number must not themselves exist in the master parts table.
EDIT 2:
Here is an example:
MasterParts
MasterPartNumber Description MinLevel MaxLevel ReOrderLevel
010-00820-50 Garmin GTN™ 750 1 5 2
AlternateParts
MasterPartNumber AlternatePartNumber
010-00820-50 0100082050
010-00820-50 GTN750
The only way I could think of to solve this would be writing a checking function (not sure what language you are working with), or trying to play around with table relationships to ensure uniqueness.
Why not have a single "part" table with an "is master part" flag and then have an "alternate parts" table that maps a "master" part to one or more "alternate" parts?
Here's one way to do it without procedural code. I've deliberately left out ON UPDATE CASCADE and ON DELETE CASCADE, but in production I might use both. (But I'd severely limit who's allowed to update and delete part numbers.)
-- New tables
create table part_numbers (
pn varchar(50) primary key,
pn_type char(1) not null check (pn_type in ('m', 'a')),
unique (pn, pn_type)
);
create table part_numbers_master (
pn varchar(50) primary key,
pn_type char(1) not null default 'm' check (pn_type = 'm'),
description varchar(100) not null,
foreign key (pn, pn_type) references part_numbers (pn, pn_type)
);
create table part_numbers_alternate (
pn varchar(50) primary key,
pn_type char(1) not null default 'a' check (pn_type = 'a'),
foreign key (pn, pn_type) references part_numbers (pn, pn_type)
);
-- Now, your tables.
create table masterparts (
master_part_number varchar(50) primary key references part_numbers_master,
min_level integer not null default 0 check (min_level >= 0),
max_level integer not null default 0 check (max_level >= min_level),
reorder_level integer not null default 0
check ((reorder_level < max_level) and (reorder_level >= min_level))
);
create table alternateparts (
master_part_number varchar(50) not null references part_numbers_master (pn),
alternate_part_number varchar(50) not null references part_numbers_alternate (pn),
primary key (master_part_number, alternate_part_number)
);
-- Some test data
insert into part_numbers values
('010-00820-50', 'm'),
('0100082050', 'a'),
('GTN750', 'a');
insert into part_numbers_master values
('010-00820-50', 'm', 'Garmin GTN™ 750');
insert into part_numbers_alternate (pn) values
('0100082050'),
('GTN750');
insert into masterparts values
('010-00820-50', 1, 5, 2);
insert into alternateparts values
('010-00820-50', '0100082050'),
('010-00820-50', 'GTN750');
In practice, I'd build updatable views for master parts and for alternate parts, and I'd limit client access to the views. The updatable views would be responsible for managing inserts, updates, and deletes. (Depending on your company's policies, you might use stored procedures instead of updatable views.)
Your design is perfect.
But SQL isn't very helpful when you try to implement such a design. There is no declarative way in SQL to enforce your business rule. You'll have to write two triggers: one for inserts into masterparts, checking that the new masterpart identifier doesn't yet exist as an alias, and the other for inserts of aliases, checking that the new alias identifier doesn't yet identify a masterpart.
Or you can do this in the application, which is worse than triggers, from the data integrity point of view.
(If you want to read up on how to enforce constraints of arbitrary complexity within an SQL engine, best coverage I have seen of the topic is in the book "Applied Mathematics for Database Professionals")
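A minimal sketch of one of the two triggers, assuming PostgreSQL and the table names from the question (the mirror trigger on AlternateParts would check MasterParts the same way):

CREATE OR REPLACE FUNCTION check_master_not_alternate() RETURNS trigger AS $$
BEGIN
    IF EXISTS (SELECT 1 FROM AlternateParts
               WHERE AlternatePartNumber = NEW.MasterPartNumber) THEN
        RAISE EXCEPTION 'Part % already exists as an alternate part number',
            NEW.MasterPartNumber;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER masterparts_check
BEFORE INSERT ON MasterParts
FOR EACH ROW EXECUTE PROCEDURE check_master_not_alternate();

Note that triggers like these do not protect against concurrent inserts without additional locking.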
Apart from the fact that it sounds like a possibly poor design:
You, in essence, want values spanning two columns in different tables to be unique.
In order to utilize the DB's native capability to check for uniqueness, you can create a third, helper column which contains a copy of all the values inside the two target columns, and put the uniqueness constraint on that column. Then, for each new value added to one of your target columns, you add the same value to the helper column. To make this an internal DB constraint, you can maintain the helper column with a trigger.
And again, needing to do the above sounds like evidence of a poor design.
--
Edit:
Regarding your edit:
You say " Alternate part numbers for each master part number must not themselves exist in the master parts table."
This itself is a design decision, which you don't explain.
I don't know enough about the domain of your problem, but:
If you think of master and alternate parts as totally different things, there is no reason to demand that "alternate part numbers for each master part number must not themselves exist in the master parts table". Otherwise, you have a common notion of "parts", be it master or alternate. This means they need to be in the same table and column.
If the second is true, you need something like this:
table "parts"
columns:
id - pk
is_master - boolean (assuming a part can not be master and alternate at the same time)
description - text
This table's role is to list and describe the parts.
Then you have several ways to denote which part is an alternate to which. It depends on whether a part can be an alternate to more than one part; and it sounds like one master part can have several alternates anyway.
You can do it in the same table, or create another one.
If in the same table: add a column alternate_to, which will be NULL for master parts and will hold a foreign key into the id column of the same table.
Otherwise, create a table, say "alternatives", with master_id and alternate_id columns, both referencing the parts table with a foreign key.
(The first option assumes that a part cannot be an alternate to more than one other part. If this is not true, the second will work anyway.) Both options are sketched below.
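Sketches of both options (column and table names assumed):

-- Same table: a self-referencing column
ALTER TABLE parts ADD COLUMN alternate_to integer NULL REFERENCES parts (id);

-- Separate table
CREATE TABLE alternatives (
    master_id integer NOT NULL REFERENCES parts (id),
    alternate_id integer NOT NULL REFERENCES parts (id),
    PRIMARY KEY (master_id, alternate_id)
);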
It's far from the ideal situation, but I need to fix a database by appending the number "1" to the PK Identity column, which has FK relations to four other tables. I'm basically making a four-digit number a five-digit number. I need to maintain the relations. I could store the number in a var, do a SET query to append the 1, and do that for each table...
Is there a better way of doing this?
You say you are using an identity data type for your primary key, so before you update the numbers you will have to SET IDENTITY_INSERT ON, and then turn it off again after the update.
As long as you have cascading updates set for your relations the other tables should be updated automatically.
EDIT: As it's not possible to change an identity value, I guess you have to export the data, set the new identity values (+10000) and then import your data again.
Anyone have a better suggestion?
Consider adding another field to the PK instead of extending the length of the PK field. Your new field will have to cascade to the related tables, like a field length increase would, but you get to retain your original PK values.
My suggestion is:
Stop writing to the tables.
Copy the tables to new tables with the new PK.
Rename the old tables to backup names.
Rename the new tables to the original table name.
Count the rows in all the tables and double check your work.
Continue using the tables.
Changing a PK after the fact is not fun.
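A rough sketch of steps 2-4, assuming SQL Server and hypothetical table/column names (indexes, constraints, and permissions must be recreated on the new table):

-- Step 2: copy with the new PK values
SELECT id + 10000 AS id, col1, col2
INTO dbo.mytable_new
FROM dbo.mytable;

-- Steps 3 and 4: rename the old table out of the way, then rename the new one into place
EXEC sp_rename 'dbo.mytable', 'mytable_backup';
EXEC sp_rename 'dbo.mytable_new', 'mytable';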
If the column in question has an identity property on it, it gets complicated. This is more-or-less how I'd do it:
Back up your database.
Put it in single user mode. You don't need anybody mucking around whilst you do the surgery.
Execute the ALTER TABLE statements necessary to
disable the primary key constraint on the table in question
disable all triggers on the table in question
disable all foreign key constraints referencing the table in question.
Clone your table, giving it a new name and a column-for-column identical definition. Don't bother with any triggers, indices, foreign keys or other constraints. Omit the identity property from the table's definition.
Create a new 'map' table that will map your old id values to the new value:
create table dbo.pk_map
(
old_id int not null primary key clustered ,
new_id int not null unique nonclustered
)
Populate the map table:
insert dbo.pk_map
select old_id = old.id ,
new_id = f( old.id ) -- f(x) is the desired transform
from dbo.tableInQuestion old
Populate your new table, giving the primary key column the new value:
insert dbo.tableInQuestion_NEW
select id = map.new_id ,
...
from dbo.tableInQuestion old
join dbo.pk_map map on map.old_id = old.id
Truncate the original table: TRUNCATE TABLE dbo.tableInQuestion. This should work safely, since you've disabled all the triggers and foreign key constraints. (If TRUNCATE is still blocked because foreign keys reference the table, fall back to DELETE dbo.tableInQuestion.)
Execute SET IDENTITY_INSERT dbo.tableInQuestion ON.
Reload the original table:
insert dbo.tableInQuestion
select *
from dbo.tableInQuestion_NEW
Execute SET IDENTITY_INSERT dbo.tableInQuestion OFF
Execute drop table dbo.tableInQuestion_NEW. We're all done with it.
Execute DBCC CHECKIDENT ('dbo.tableInQuestion', RESEED) to get the identity counter back in sync with the data in the table.
Now, use the map table to propagate the changed primary key column down the line. Depending on your E-R model, this can get complicated as foreign keys referencing the updated column may themselves be part of a composite primary key.
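For a simple (non-composite) case, each child-table update might look like this; the child table and column names here are hypothetical:

UPDATE child
SET    child.parent_id = map.new_id
FROM   dbo.childTableInQuestion child
JOIN   dbo.pk_map map ON map.old_id = child.parent_id;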
When you're all done, start re-enabling the constraints and triggers you disabled. Make sure you do this using the WITH CHECK option. Fix any problems thus uncovered.
Finally, drop the map table, and clear the single user flag and bring your system(s) back online.
Piece of cake! (or something.)
Consider this approach:
Reset the identity seed to 10000 + the current seed.
Set identity insert on
Insert into the table from the values in the table and add 10000 to the identity column on the way.
EX:
SET IDENTITY_INSERT [Table] ON
INSERT [Table] (identity_col, column1, column2)
SELECT identity_col + 10000, column1, column2
FROM [Table]
WHERE identity_col < [original max identity value]  -- placeholders throughout
After the insert, you know the identity values are exactly 10000 more than the originals.
Update the foreign keys by adding 10000.
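For example, for a hypothetical referencing table:

UPDATE dbo.childTable
SET    parent_id = parent_id + 10000;  -- matches the offset applied to the parent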