Increment a column value of an entity in an SQL table

I've set up the following table:
CREATE TABLE transactions (txn_id SERIAL, stock VARCHAR NOT NULL, qty INT NOT NULL, user_id INT NOT NULL);
After inserting a few rows, the table looks like this:
 txn_id | stock  | qty | user_id
--------+--------+-----+---------
      1 | APPL   |   2 |       1
      2 | GOOGLE |   3 |       4
      3 | TSLA   |   1 |       2
Now, when inserting a new row into the table,
for example - INSERT INTO transactions (stock, qty, user_id) VALUES ('APPL', 3, 1)
if 'stock' and 'user_id' already match an existing row (as in the above example), I only want the quantity to be updated in the database. In this case, row 1 should have its 'qty' column incremented by 3.
Is there a solution to this without using any ORM like SQLalchemy etc. ?

You want on conflict. First set up a unique constraint on the stock/user_id columns:
alter table transactions add constraint unq_transactions_stock_user_id unique (stock, user_id);
Then use this with an on conflict statement:
insert into transactions (stock, qty, user_id)
values ('APPL', 3, 1)
on conflict on constraint unq_transactions_stock_user_id do update
set qty = coalesce(transactions.qty, 0) + excluded.qty;
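If you would rather not name the constraint, the conflict target can also be given as the column list; a minimal equivalent sketch, assuming the same unique constraint on (stock, user_id) is in place:
insert into transactions (stock, qty, user_id)
values ('APPL', 3, 1)
on conflict (stock, user_id) do update
set qty = coalesce(transactions.qty, 0) + excluded.qty;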


Error "duplicate key value violates unique constraint" while updating multiple rows

I created a table in PostgreSQL and Oracle as
CREATE TABLE temp (
    seqnr smallint NOT NULL,
    defn_id int NOT NULL,
    attr_id int NOT NULL,
    input CHAR(50) NOT NULL,
    CONSTRAINT pk_id PRIMARY KEY (defn_id, attr_id, seqnr)
);
The temp table has a composite primary key on (defn_id, attr_id, seqnr).
Then I inserted the record in the temp table as
INSERT INTO temp(seqnr,defn_id,attr_id,input)
VALUES (1,100,100,'test1');
INSERT INTO temp(seqnr,defn_id,attr_id,input)
VALUES (2,100,100,'test2');
INSERT INTO temp(seqnr,defn_id,attr_id,input)
VALUES (3,100,100,'test3');
INSERT INTO temp(seqnr,defn_id,attr_id,input)
VALUES (4,100,100,'test4');
INSERT INTO temp(seqnr,defn_id,attr_id,input)
VALUES (5,100,100,'test5');
in both Oracle and Postgres!
The table now contains:
seqnr | defn_id | attr_id | input
1 | 100 | 100 | test1
2 | 100 | 100 | test2
3 | 100 | 100 | test3
4 | 100 | 100 | test4
5 | 100 | 100 | test5
When I run the command:
UPDATE temp SET seqnr=seqnr+1
WHERE defn_id = 100 AND attr_id = 100 AND seqnr >= 1;
In Oracle it updates 5 rows and the output is:
seqnr | defn_id | attr_id | input
2 | 100 | 100 | test1
3 | 100 | 100 | test2
4 | 100 | 100 | test3
5 | 100 | 100 | test4
6 | 100 | 100 | test5
But PostgreSQL raises an error:
DETAIL: Key (defn_id, attr_id, seqnr)=(100, 100, 2) already exists.
Why does this happen and how can I replicate the same result in Postgres as Oracle?
Or how can the same result be achieved in Postgres without any errors?
UNIQUE and PRIMARY KEY constraints are checked immediately (for each row) unless they are defined DEFERRABLE, which is the solution you are asking for.
ALTER TABLE temp
DROP CONSTRAINT pk_id
, ADD CONSTRAINT pk_id PRIMARY KEY (defn_id, attr_id, seqnr) DEFERRABLE
;
Then your UPDATE just works.
db<>fiddle here
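With the constraint declared DEFERRABLE (but not deferred), the uniqueness check moves to the end of the statement, which is all the UPDATE needs. If you ever want to push the check all the way to commit time, that can be done per transaction; a small sketch, not required for the UPDATE above:
BEGIN;
SET CONSTRAINTS pk_id DEFERRED;  -- check the PK only at COMMIT
UPDATE temp SET seqnr = seqnr + 1
WHERE  defn_id = 100 AND attr_id = 100 AND seqnr >= 1;
COMMIT;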
This comes at a cost, though. The manual:
Note that deferrable constraints cannot be used as conflict
arbitrators in an INSERT statement that includes an ON CONFLICT DO UPDATE clause.
And for FOREIGN KEY constraints:
The referenced columns must be the columns of a non-deferrable unique
or primary key constraint in the referenced table.
And:
When a UNIQUE or PRIMARY KEY constraint is not deferrable,
PostgreSQL checks for uniqueness immediately whenever a row is
inserted or modified. The SQL standard says that uniqueness should be
enforced only at the end of the statement; this makes a difference
when, for example, a single command updates multiple key values. To
obtain standard-compliant behavior, declare the constraint as
DEFERRABLE but not deferred (i.e., INITIALLY IMMEDIATE). Be aware
that this can be significantly slower than immediate uniqueness
checking.
See:
Constraint defined DEFERRABLE INITIALLY IMMEDIATE is still DEFERRED?
I would avoid a DEFERRABLE PK if at all possible. Maybe you can work around the demonstrated problem? This usually works:
UPDATE temp t
SET    seqnr = t.seqnr + 1
FROM  (
    SELECT defn_id, attr_id, seqnr
    FROM   temp
    WHERE  defn_id = 100 AND attr_id = 100 AND seqnr >= 1
    ORDER  BY defn_id, attr_id, seqnr DESC
    ) o
WHERE (t.defn_id, t.attr_id, t.seqnr)
    = (o.defn_id, o.attr_id, o.seqnr);
db<>fiddle here
But there are no guarantees as ORDER BY is not specified for UPDATE in Postgres.
Related:
UPDATE with ORDER BY

PostgreSQL add new not null column and fill with ids from insert statement

I've got 2 tables.
CREATE TABLE content (
id bigserial NOT NULL,
name text
);
CREATE TABLE data (
id bigserial NOT NULL,
...
);
The tables are already filled with a lot of data.
Now I want to add a new column content_id (NOT NULL) to the data table.
It should be a foreign key to the content table.
Is it possible to automatically create an entry in the content table and set the resulting content_id in the data table?
For example
content
| id | name |
| 1 | abc |
| 2 | cde |
data
| id |... |
| 1 |... |
| 2 |... |
| 3 |... |
Now I need an update statement that creates 3 (in this example) content entries and adds the ids to the data table, to get this result:
content
| id | name |
| 1 | abc |
| 2 | cde |
| 3 | ... |
| 4 | ... |
| 5 | ... |
data
| id |... | content_id |
| 1 |... | 3 |
| 2 |... | 4 |
| 3 |... | 5 |
demo:db<>fiddle
According to the answers presented here: How can I add a column that doesn't allow nulls in a Postgresql database?, there are several ways of adding a new NOT NULL column and filling it directly.
Basically there are 3 steps. Choose the best fitting variant (with or without a transaction, setting a default value first and removing it afterwards, leaving out the NOT NULL constraint first and adding it afterwards, ...).
Step 1: Add the new column (without the NOT NULL constraint, because its values are not available at this point):
ALTER TABLE data ADD COLUMN content_id integer;
Step 2: Insert the data into both tables in one statement:
WITH inserted AS (                  -- 1
    INSERT INTO content
    SELECT
        generate_series(
            (SELECT MAX(id) + 1 FROM content),
            (SELECT MAX(id) FROM content) + (SELECT COUNT(*) FROM data)
        ),
        'dummy text'
    RETURNING id
), matched AS (                     -- 2
    SELECT
        d.id AS data_id,
        i.id AS content_id
    FROM (
        SELECT
            id,
            row_number() OVER ()
        FROM data
    ) d
    JOIN (
        SELECT
            id,
            row_number() OVER ()
        FROM inserted
    ) i ON i.row_number = d.row_number
)                                   -- 3
UPDATE data d
SET content_id = s.content_id
FROM (
    SELECT * FROM matched
) s
WHERE d.id = s.data_id;
Executing several statements one after another by using the results of the previous one can be achieved using WITH clauses (CTEs):
Insert data into the content table: this generates an integer series starting at MAX(id) + 1 of the current content ids, with as many records as the data table has. Afterwards the new ids are returned (see the note on the id sequence below).
Now we need to match the current records of the data table with the new ids. For both sides we use the row_number() window function to generate a consecutive row count for each record. Because the insert result and the actual data table have the same number of records, this can be used as the join criterion. So we can match the id column of the data table with the new content ids.
This matched data can be used in the final update of the new content_id column.
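A side note on this approach: the ids are generated with generate_series rather than the bigserial default, so the sequence behind content.id is not advanced. If later inserts should keep using the default, the sequence can be resynchronized afterwards (an addition to the answer above, assuming the table and column names used here):
-- bump the sequence behind content.id to the current maximum
SELECT setval(
    pg_get_serial_sequence('content', 'id'),
    (SELECT MAX(id) FROM content)
);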
Step 3: Add the NOT NULL constraint
ALTER TABLE data ALTER COLUMN content_id SET NOT NULL;
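The question also asks for content_id to be a foreign key to the content table. A sketch of that, assuming content.id is meant to be the primary key (the sample DDL does not declare one):
-- assumption: content.id should be the primary key
ALTER TABLE content ADD PRIMARY KEY (id);

ALTER TABLE data
    ADD CONSTRAINT fk_data_content_id
    FOREIGN KEY (content_id) REFERENCES content (id);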

How to add foreign key constraint to Table A (id, type) referencing either of two tables Table B (id, type) or Table C (id, type)?

I'm looking to use two columns in Table A as foreign keys for either one of two tables: Table B or Table C. Using columns table_a.item_id and table_a.item_type_id, I want to force any new rows to either have a matching item_id and item_type_id in Table B or Table C.
Example:
Table A: Inventory
+---------+--------------+-------+
| item_id | item_type_id | count |
+---------+--------------+-------+
| 2 | 1 | 32 |
| 3 | 1 | 24 |
| 1 | 2 | 10 |
+---------+--------------+-------+
Table B: Recipes
+----+--------------+-------------------+-------------+----------------------+
| id | item_type_id | name | consistency | gram_to_fluid_ounces |
+----+--------------+-------------------+-------------+----------------------+
| 1 | 1 | Delicious Juice | thin | .0048472 |
| 2 | 1 | Ok Tasting Juice | thin | .0057263 |
| 3 | 1 | Protein Smoothie | heavy | .0049847 |
+----+--------------+-------------------+-------------+----------------------+
Table C: Products
+----+--------------+----------+--------+----------+----------+
| id | item_type_id | name | price | in_stock | is_taxed |
+----+--------------+----------+--------+----------+----------+
| 1 | 2 | Purse | $200 | TRUE | TRUE |
| 2 | 2 | Notebook | $14.99 | TRUE | TRUE |
| 3 | 2 | Computer | $1,099 | FALSE | TRUE |
+----+--------------+----------+--------+----------+----------+
Other Table: Item_Types
+----+-----------+
| id | type_name |
+----+-----------+
| 1 | recipes |
| 2 | products |
+----+-----------+
I want to be able to have an inventory table where employees can enter inventory counts regardless of whether an item is a recipe or a product. I don't want to have to have a product_inventory and recipe_inventory table as there are many operations I need to do across all inventory items regardless of item types.
One solution would be to create a reference table like so:
Table CD: Items
+---------+--------------+------------+-----------+
| item_id | item_type_id | product_id | recipe_id |
+---------+--------------+------------+-----------+
| 2 | 1 | NULL | 2 |
| 3 | 1 | NULL | 3 |
| 1 | 2 | 1 | NULL |
+---------+--------------+------------+-----------+
It just seems very cumbersome, plus I'd now need to add/remove products/recipes from this new table whenever they are added/removed from their respective tables. (Is there an automatic way to achieve this?)
CREATE TABLE [dbo].[inventory] (
[id] [bigint] IDENTITY(1,1) NOT NULL,
[item_id] [smallint] NOT NULL,
[item_type_id] [tinyint] NOT NULL,
[count] [float] NOT NULL,
CONSTRAINT [PK_inventory_id] PRIMARY KEY CLUSTERED ([id] ASC)
) ON [PRIMARY]
What I would really like to do is something like this...
ALTER TABLE [inventory]
ADD CONSTRAINT [FK_inventory_sources] FOREIGN KEY ([item_id],[item_type_id])
REFERENCES {[products] ([id],[item_type_id]) OR [recipes] ([id],[item_type_id])}
Maybe there is no solution as I'm describing it, so if you have any ideas where I can maintain the same/similar schema, I'm definitely open to hearing them!
Thanks :)
Since your products and recipes are stored separately, and appear to mostly have separate columns, separate inventory tables are probably the correct approach, e.g.
CREATE TABLE dbo.ProductInventory
(
Product_id INT NOT NULL,
[count] INT NOT NULL,
CONSTRAINT FK_ProductInventory__Product_id FOREIGN KEY (Product_id)
REFERENCES dbo.Product (Product_id)
);
CREATE TABLE dbo.RecipeInventory
(
Recipe_id INT NOT NULL,
[count] INT NOT NULL,
CONSTRAINT FK_RecipeInventory__Recipe_id FOREIGN KEY (Recipe_id)
REFERENCES dbo.Recipe (Recipe_id )
);
If you need all types combined, you can simply use a view:
CREATE VIEW dbo.Inventory
AS
SELECT Product_id AS item_id,
2 AS item_type_id,
[Count]
FROM ProductInventory
UNION ALL
SELECT Recipe_id AS item_id,
1 AS item_type_id,
[Count]
FROM RecipeInventory;
GO
If you create a new item_type, you need to amend the DB design anyway to create a new table, so you would just amend the view at the same time.
Another possibility, would be to have a single Items table, and then have Products/Recipes reference this. So you start with your items table, each of which has a unique ID:
CREATE TABLE dbo.Items
(
item_id INT IDENTITY(1, 1) NOT NULL,
Item_type_id INT NOT NULL,
CONSTRAINT PK_Items__ItemID PRIMARY KEY (item_id),
CONSTRAINT FK_Items__Item_Type_ID FOREIGN KEY (Item_Type_ID) REFERENCES Item_Type (Item_Type_ID),
CONSTRAINT UQ_Items__ItemID_ItemTypeID UNIQUE (Item_ID, Item_type_id)
);
Note the unique key added on (item_id, item_type_id), this is important for referential integrity later on.
Then each of your sub tables has a 1:1 relationship with this, so your product table would become:
CREATE TABLE dbo.Products
(
item_id INT NOT NULL,
Item_type_id AS 2,
name VARCHAR(50) NOT NULL,
Price DECIMAL(10, 4) NOT NULL,
InStock BIT NOT NULL,
CONSTRAINT PK_Products__ItemID PRIMARY KEY (item_id),
CONSTRAINT FK_Products__Item_Type_ID FOREIGN KEY (Item_Type_ID)
REFERENCES Item_Type (Item_Type_ID),
CONSTRAINT FK_Products__ItemID_ItemTypeID FOREIGN KEY (item_id, Item_Type_ID)
REFERENCES dbo.Items (item_id, item_type_id)
);
A few things to note:
item_id is again the primary key, ensuring the 1:1 relationship.
the computed column item_type_id (AS 2) ensures all item_type_id values are set to 2. This is key, as it allows a foreign key constraint to be added (a matching Recipes table is sketched after this list).
the foreign key on (item_id, item_type_id) back to the items table. This ensures that you can only insert a record to the product table, if the original record in the items table has an item_type_id of 2.
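For completeness, the Recipes table would follow the same pattern; a sketch under the same assumptions, with column names taken from the sample data and item_type_id fixed to 1 for recipes:
CREATE TABLE dbo.Recipes
(
    item_id INT NOT NULL,
    Item_type_id AS 1,
    name VARCHAR(50) NOT NULL,
    consistency VARCHAR(20) NOT NULL,
    gram_to_fluid_ounces DECIMAL(10, 7) NOT NULL,
    CONSTRAINT PK_Recipes__ItemID PRIMARY KEY (item_id),
    CONSTRAINT FK_Recipes__ItemID_ItemTypeID FOREIGN KEY (item_id, Item_Type_ID)
        REFERENCES dbo.Items (item_id, item_type_id)
);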
A third option would be a single table for recipes and products and make any columns not required for both nullable. This answer on types of inheritance is well worth a read.
I think there is a flaw in your database design. The best way to solve your actual problem is to have recipes and products in one single table. Right now you have a redundant column in each table called item_type_id. That column is not worth anything unless you actually have the items in the same table. I say redundant because it has the same value for absolutely every entry in each table.
You have two options. If you cannot change the database design, work without foreign keys and make the logic layer select from the correct tables.
Or, if you can change the database design, make products and recipes exist in the same table. You already have an item_type table, which can identify item categorization, so it makes sense to put all items in the same table.
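A sketch of what that merged table could look like, combining the columns from the sample data and making the type-specific ones nullable (the table and column names here are illustrative assumptions, not a prescribed schema):
CREATE TABLE dbo.Items
(
    id INT IDENTITY(1, 1) NOT NULL,
    item_type_id TINYINT NOT NULL,              -- 1 = recipe, 2 = product
    name VARCHAR(50) NOT NULL,
    consistency VARCHAR(20) NULL,               -- recipe-only
    gram_to_fluid_ounces DECIMAL(10, 7) NULL,   -- recipe-only
    price DECIMAL(10, 2) NULL,                  -- product-only
    in_stock BIT NULL,                          -- product-only
    is_taxed BIT NULL,                          -- product-only
    CONSTRAINT PK_Items PRIMARY KEY (id),
    CONSTRAINT FK_Items__Item_Type FOREIGN KEY (item_type_id)
        REFERENCES Item_Types (id)
);
The inventory table can then carry a plain foreign key on item_id referencing this single table.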
A foreign key constraint can only reference one table. Think about apples and oranges: a column cannot refer to both oranges and apples; it must be either orange or apple.
As a side note, this can be somewhat achieved with PERSISTED computed columns, however it introduces overhead and complexity.
Check this for reference.
You can add some computed columns to the Inventory table:
ALTER TABLE Inventory
ADD _recipe_item_id AS CASE WHEN item_type_id = 1 THEN item_id END persisted
ALTER TABLE Inventory
ADD _product_item_id AS CASE WHEN item_type_id = 2 THEN item_id END persisted
You can then add two separate foreign keys to the two tables, using those two columns instead of item_id (sketched below). I'm assuming the item_type_id column in those two tables is already computed/constrained appropriately, but if not you may want to consider that too.
Because these computed columns are NULL when the wrong type is selected, and because SQL Server doesn't check FK constraints if at least one column value is NULL, they can both exist and only one or the other will be satisfied at any time.
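A minimal sketch of those two foreign keys, assuming Recipes (id) and Products (id) are the referenced keys:
ALTER TABLE Inventory
ADD CONSTRAINT FK_Inventory__Recipe FOREIGN KEY (_recipe_item_id)
    REFERENCES Recipes (id)

ALTER TABLE Inventory
ADD CONSTRAINT FK_Inventory__Product FOREIGN KEY (_product_item_id)
    REFERENCES Products (id)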

Inserting a row at the specific place in SQLite database

I was creating the database in SQLite Manager and by mistake I forgot to add a row.
Now I want to insert a row in the middle manually, and the auto-increment keys of the rows below it should automatically be increased by 1. I hope my problem is clear.
Thanks.
You shouldn't care about key values, just append your row at the end.
If you really need to do so, you could probably just update the keys with something like this. Say you want to insert the new row at key 87:
Make room for the key
update mytable
set key = key + 1
where key >= 87
Insert your row
insert into mytable ...
And finally update the key for the new row
update mytable
set key = 87
where key = NEW_ROW_KEY
I would just update IDs, incrementing them, then insert record setting ID manually:
CREATE TABLE cats (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name VARCHAR
);
INSERT INTO cats (name) VALUES ('John');
INSERT INTO cats (name) VALUES ('Mark');
SELECT * FROM cats;
| 1 | John |
| 2 | Mark |
UPDATE cats SET ID = ID + 1 WHERE ID >= 2; -- "2" is the ID of forgotten record.
SELECT * FROM cats;
| 1 | John |
| 3 | Mark |
INSERT INTO cats (id, name) VALUES (2, 'SlowCat'); -- "2" is the ID of forgotten record.
SELECT * FROM cats;
| 1 | John |
| 2 | SlowCat |
| 3 | Mark |
The next record, inserted using the AUTOINCREMENT functionality, will get the next ID (4 in our case).

Write SQL script to insert data

In a database that contains many tables, I need to write a SQL script to insert data if it does not exist.
Table currency
| id | Code | lastupdate | rate |
+--------+---------+------------+-----------+
| 1 | USD | 05-11-2012 | 2 |
| 2 | EUR | 05-11-2012 | 3 |
Table client
| id | name | createdate | currencyId|
+--------+---------+------------+-----------+
| 4 | tony | 11-24-2010 | 1 |
| 5 | john | 09-14-2010 | 2 |
Table: account
| id | number | createdate | clientId |
+--------+---------+------------+-----------+
| 7 | 1234 | 12-24-2010 | 4 |
| 8 | 5648 | 12-14-2010 | 5 |
I need to insert to:
currency (id=3, Code=JPY, lastupdate=today, rate=4)
client (id=6, name=Joe, createdate=today, currencyId=Currency with Code 'USD')
account (id=9, number=0910, createdate=today, clientId=Client with name 'Joe')
Problem:
script must check if row exists or not before inserting new data
script must allow us to add a foreign key to the new row where this foreign key relates to a row already present in the database (such as currencyId in the client table)
script must allow us to add the current datetime to a column in the insert statement (such as createdate in the client table)
script must allow us to add a foreign key to the new row where this foreign key relates to a row inserted earlier in the same script (such as clientId in the account table)
Note: I tried the following SQL statement but it solved only the first problem
INSERT INTO Client (id, name, createdate, currencyId)
SELECT 6, 'Joe', '05-11-2012', 1
WHERE not exists (SELECT * FROM Client where id=6);
This query runs without any error, but as you can see I wrote createdate and currencyId manually. I need to take the currency id from a select statement with a where clause (I tried to substitute the 1 with a select statement but the query failed).
This is an example of what I need; in my database, I need this script to insert more than 30 rows into more than 10 tables.
Any help is appreciated.
You wrote
I tried to substitute 1 by select statement but query failed
But I wonder why it failed? What did you try? This should work:
INSERT INTO Client (id, name, createdate, currencyId)
SELECT
6,
'Joe',
current_date,
(select c.id from currency as c where c.code = 'USD') as currencyId
WHERE not exists (SELECT * FROM Client where id=6);
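The same pattern extends to the account row, which has to reference the client inserted just above; a sketch assuming the columns shown in the question and that the client insert ran earlier in the same script (number is quoted on the assumption that it is stored as text, to keep the leading zero):
INSERT INTO account (id, number, createdate, clientId)
SELECT
    9,
    '0910',
    current_date,
    (select cl.id from Client as cl where cl.name = 'Joe') as clientId
WHERE not exists (SELECT * FROM account where id=9);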
It looks like you can work out if the data exists.
Here is a quick bit of code written in SQL Server / Sybase that I think answers your basic questions:
create table currency(
id numeric(16,0) identity primary key,
code varchar(3) not null,
lastupdated datetime not null,
rate smallint
);
create table client(
id numeric(16,0) identity primary key,
createddate datetime not null,
currencyid numeric(16,0) foreign key references currency(id)
);
insert into currency (code, lastupdated, rate)
values('EUR',GETDATE(),3)
--inserts the date and last allocated identity into client
insert into client(createddate, currencyid)
values(GETDATE(), @@IDENTITY)
go
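One caveat if this runs on SQL Server: @@IDENTITY also picks up identity values generated by triggers, so SCOPE_IDENTITY() is usually the safer way to grab the currency id that was just inserted. A variant of the last two statements, assuming the same tables as above:
insert into currency (code, lastupdated, rate)
values('JPY', GETDATE(), 4)

-- SCOPE_IDENTITY() returns the identity generated in the current scope,
-- unaffected by any triggers on the currency table
insert into client(createddate, currencyid)
values(GETDATE(), SCOPE_IDENTITY())
go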