I have the following query:
UPDATE items SET quantity = quantity - 1
WHERE quantity > 0 AND user_id = $1 AND item_id IN (5, 6, 7);
I'd like to modify it such that the update will only occur if all three rows are updated.
That is, unless that user has items 5, 6, 7 with quantities greater than 0 for each of them, 0 rows will be updated. However, if the condition is true for each, then all three rows are updated.
I'm not sure of a simple way to do this. My gut solution is to use a CTE where the initial query gets the COUNT and the update only runs if the count = 3 (roughly sketched below), but I think there must be a better way?
Also, I'm using 3 items here as an example. The number of item_ids is variable and can be anywhere between 1 and 20 in my case (passed from the app server as an array).
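For reference, the CTE version I have in mind looks roughly like this (untested sketch; the literal 3 would be passed in alongside the array):
-- Count the qualifying rows first; only perform the update when all of them qualify.
WITH eligible AS (
    SELECT count(*) AS cnt
    FROM items
    WHERE quantity > 0 AND user_id = $1 AND item_id IN (5, 6, 7)
)
UPDATE items
SET quantity = quantity - 1
FROM eligible
WHERE eligible.cnt = 3
  AND quantity > 0 AND user_id = $1 AND item_id IN (5, 6, 7);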
Use a transaction. Inside the transaction, execute the UPDATE and check the number of rows it updated. If that number is less than the length of the list of IDs, abort the transaction with ROLLBACK; otherwise COMMIT.
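A minimal sketch of that flow, assuming PostgreSQL and that the affected-row count is read on the application side from the driver's result:
BEGIN;

UPDATE items SET quantity = quantity - 1
WHERE quantity > 0 AND user_id = $1 AND item_id IN (5, 6, 7);

-- The driver reports how many rows the UPDATE touched.
-- If that count is less than the number of ids passed in: issue ROLLBACK;
-- otherwise: issue COMMIT;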
Yet another option is to check that none of the requested <user_id, item_id> pairs is present with a quantity equal to 0, using the NOT EXISTS operator.
UPDATE items i
SET quantity = quantity - 1
WHERE i.user_id = $1
  AND i.item_id IN (5, 6, 7)
  AND NOT EXISTS (SELECT 1
                  FROM items
                  WHERE user_id = i.user_id
                    AND item_id IN (5, 6, 7)
                    AND quantity = 0); -- skip the whole update if any requested item for this user is at 0
Check the demo here.
I added a check constraint quantity >= 0 to the table and then just did this (if any of the rows would drop below zero, the whole statement fails and nothing is updated):
UPDATE items SET quantity = quantity - 1
WHERE user_id = $1 AND item_id IN (5, 6, 7);
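For reference, a sketch of that constraint (the constraint name is just illustrative):
ALTER TABLE items
    ADD CONSTRAINT items_quantity_nonnegative CHECK (quantity >= 0);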
I want to insert a new row into a table with the following design:
CREATE TABLE DMZ
(
    DDM date NOT NULL,
    NDM int NOT NULL,
    PR int NOT NULL,
    CONSTRAINT PK_DMZ PRIMARY KEY(NDM)
);
PR can only be 1 or 2, which I defined as a constraint (1 if the document is for income, 2 if it is for consumption). NDM is the document number (actually an Id in my case).
ALTER TABLE DMZ
ADD CONSTRAINT PR CHECK (PR IN (1,2));
I filled it with some handwritten data
INSERT INTO DMZ VALUES('2014.01.04', 20, 1);
INSERT INTO DMZ VALUES('2014.01.04', 21, 1);
INSERT INTO DMZ VALUES('2014.01.04', 22, 2);
There are two rows where PR = 1 and only one where PR = 2. I want to write a script that INSERTs a new row like this:
INSERT INTO DMZ(DDM, PR) VALUES(GETDATE(), X)
where for X I want something like: count the rows where PR = 1 and the rows where PR = 2; if there are more rows with PR = 1, use PR = 2 in the newly inserted row, and if there are more rows with PR = 2, use PR = 1.
P.S.: This is a recreation of my deleted answer; I hope it's clear now. To those who asked why I am doing such nonsense: it is part of a list of tasks I HAVE to perform. I tried to do it, but I don't know how to handle the PR part.
EDIT: I managed to write what I needed, but I am getting the following error: "Cannot perform an aggregate function on an expression containing an aggregate or a subquery."
INSERT INTO DMZ(ddm, pr)
SELECT COUNT(CASE WHEN (COUNT(CASE WHEN PR = 1 THEN 1 ELSE 0 END)> COUNT(CASE WHEN PR = 2 THEN 1 ELSE 0 END)) THEN 1 ELSE 2 END) AS pr, GETDATE() as ddm
FROM DMZ
Try doing an INSERT ... SELECT statement with a CASE expression that checks your PR counts, using SUM and CASE in a subquery:
INSERT INTO DMZ (DDM, NDM, PR)
SELECT GETDATE() AS DDM,
       a.NDM AS NDM,
       CASE WHEN a.PR_1_Count > a.PR_2_Count
            THEN 2
            ELSE 1
       END AS PR
FROM (SELECT
          MAX(NDM) + 1 AS NDM,
          SUM(CASE WHEN PR = 1 THEN 1 ELSE 0 END) AS PR_1_Count,
          SUM(CASE WHEN PR = 2 THEN 1 ELSE 0 END) AS PR_2_Count
      FROM DMZ) a
Fiddle here.
Note: If you want an actual count to be inserted, remove your CONSTRAINT for the PR check and change the CASE statement from THEN 2 to THEN PR_2_Count and THEN 1 to THEN PR_1_Count.
Also, I've hardcoded an NDM column value in my demo because your column is set to NOT NULL; I assume you'll handle that.
Update: Per your comment below, I've updated the syntax to include MAX(NDM) + 1. I would, however, suggest adding an NDM IDENTITY column to replace your current NDM column, so that it generates your PK for you rather than you generating the value yourself (see the attached Fiddle for an example of this). Read more about IDENTITY columns here and how to do it here.
Identity columns can be used for generating key values. The identity property on a column guarantees the following:
Each new value is generated based on the current seed & increment.
Each new value for a particular transaction is different from other concurrent transactions on the table.
The identity property on a column does not guarantee the following:
Uniqueness of the value - uniqueness must be enforced by using a PRIMARY KEY or UNIQUE constraint or UNIQUE index.
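For illustration, a sketch of the table with NDM as an IDENTITY column (the seed and increment of 1 are assumptions):
CREATE TABLE DMZ
(
    NDM int IDENTITY(1,1) NOT NULL,
    DDM date NOT NULL,
    PR int NOT NULL,
    CONSTRAINT PK_DMZ PRIMARY KEY(NDM)
);
-- NDM is now generated automatically, so inserts list only DDM and PR:
INSERT INTO DMZ (DDM, PR) VALUES (GETDATE(), 1);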
I'm currently working on creating a Log table that will hold all the data from another table and will also record, as versions, the changes in the prices of items in the main table.
I would like to know how to save the versions, that is, increment the value by 1 each time the same item is inserted into the Log table.
The Log table is loaded via a MERGE of data coming from the user API, in a Python script using pyodbc:
MERGE LogTable AS t
USING (VALUES (?, ?, ?)) AS s(ID, ItemPrice, ItemName)
ON t.ID = s.ID AND t.ItemPrice = s.ItemPrice
WHEN NOT MATCHED BY TARGET
THEN INSERT (ID, ItemPrice, ItemName, Date)
     VALUES (s.ID, s.ItemPrice, s.ItemName, GETDATE())
Table example:
Id | ItemPrice | ItemName | Version | Date
1  | 50        | Foo      | 1       | Today
2  | 30        | bar      | 1       | Today
And after inserting the Item with ID = 1 again with a different price, the table should look like this:
Id | ItemPrice | ItemName | Version | Date
1  | 50        | Foo      | 1       | Today
2  | 30        | bar      | 1       | Today
1  | 45        | Foo      | 2       | Today
I saw some similar questions mentioning triggers, but in those cases a MERGE was not being used to insert the data into the Log table.
Maybe the following helps you; modify your insert statement like this:
Insert Into tbl_name
Values (1, 45, 'Foo',
COALESCE((Select MAX(D.Version) From tbl_name D Where D.Id = 1), 0) + 1, GETDATE())
See a demo from db<>fiddle.
Update, according to the enhancements proposed by @GarethD:
First: using ISNULL instead of COALESCE will be more performant here.
The performance difference matters when the fallback is not a constant but rather a query of some sort, because COALESCE may evaluate the subquery twice.
Second: to prevent the race condition that may occur when multiple threads try to read the MAX value at the same time, the query becomes the following:
Insert Into tbl_name WITH (HOLDLOCK)
Values (1, 45, 'Foo',
ISNULL((Select MAX(D.Version) From tbl_name D Where D.Id = 1), 0) + 1, GETDATE())
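If you want to keep the MERGE from the question, one option (a sketch along the same lines, not tested against your schema) is to compute the next version in the USING source:
MERGE LogTable WITH (HOLDLOCK) AS t
USING (
    SELECT v.ID, v.ItemPrice, v.ItemName,
           -- next version for this item, starting at 1 when the item is new
           ISNULL((SELECT MAX(l.Version) FROM LogTable l WHERE l.ID = v.ID), 0) + 1 AS NextVersion
    FROM (VALUES (?, ?, ?)) AS v(ID, ItemPrice, ItemName)
) AS s
ON t.ID = s.ID AND t.ItemPrice = s.ItemPrice
WHEN NOT MATCHED BY TARGET
THEN INSERT (ID, ItemPrice, ItemName, Version, Date)
     VALUES (s.ID, s.ItemPrice, s.ItemName, s.NextVersion, GETDATE());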
UPDATE polls_options SET `votes`=`votes`+1, `percent`=ROUND((`votes`+1) / (SELECT voters FROM polls WHERE poll_id=? LIMIT 1) * 100,1)
WHERE option_id=?
AND poll_id=?
I don't have table data yet to test it properly. :)
And by the way, what data type should the percentage values be stored as in the database?
Thanks for the help!
You don't say which database you're using (PostgreSQL, MySQL, Oracle, etc.), but if you're using MySQL you could get away with a TINYINT datatype. You're rounding to an integer anyway, and assuming your percentages will always be between 0 and 100, you'll be fine.
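A sketch of that change, assuming the column is named percent as in your UPDATE:
ALTER TABLE polls_options MODIFY percent TINYINT NOT NULL;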
Your problem seems to be that you don't have any test data so you are unable to test the syntax of your query. But that is a problem you can easily solve yourself and it doesn't even take that long:
Just make up some data and use that to test.
This isn't as hard as it might sound. For example here I create two polls, the first of which has four votes and the second of which has two votes. I then try to add a vote to option 1 of poll 1 using your query.
CREATE TABLE polls_options (
poll_id INT NOT NULL,
option_id INT NOT NULL,
votes INT NOT NULL,
percent FLOAT NOT NULL
);
INSERT INTO polls_options (poll_id, option_id, votes, percent) VALUES
(1, 1, 1, '25'),
(1, 2, 3, '75'),
(2, 1, 1, '50'),
(2, 2, 1, '50');
CREATE TABLE polls (poll_id INT NOT NULL, voters INT NOT NULL);
INSERT INTO polls (poll_id, voters) VALUES
(1, 4),
(2, 2);
UPDATE polls_options
SET votes = votes + 1,
percent = ROUND((votes + 1) / (SELECT voters FROM polls WHERE poll_id = 1 LIMIT 1) * 100,1)
WHERE option_id = 1
AND poll_id = 1;
SELECT * FROM polls_options;
Here are the results:
poll_id option_id votes percent
1 1 2 75
1 2 3 75
2 1 1 50
2 2 1 50
You can see that there are a number of problems:
The polls table isn't updated yet, so the total vote count for poll 1 is wrong (4 instead of 5). Notice that you don't even need this table: it duplicates information that can already be derived from the polls_options table. Having to keep these two tables in sync is extra work. If you need to adjust the results for some reason, for example to remove some spam voting, you will have to remember to update both tables. It's unnecessary extra work and an extra source of errors.
Even if you had remembered to update the polls table first, the percentage for option 1 would still be calculated incorrectly: it is calculated as 3/5 instead of 2/5 because MySQL applies the SET assignments from left to right, so votes has already been incremented when percent is computed, effectively giving ((votes + 1) + 1).
The percentage for option 2 isn't updated, causing the total percentage for poll 1 to be greater than 100.
You probably shouldn't even be storing the percentage in the database. Instead of persisting this value, consider calculating it on the fly only when you need it (see the sketch after this list).
You might want to reconsider your table design to avoid redundant data. Consider normalizing your table structure. If you do this then all the problems I listed above will be solved and your statements will be much simpler.
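For example, a sketch of calculating the percentages on the fly (plain MySQL, no window functions needed; the totals come straight from polls_options, so the separate polls table isn't required):
SELECT o.option_id,
       o.votes,
       ROUND(o.votes / t.total * 100, 1) AS percent
FROM polls_options o
JOIN (SELECT poll_id, SUM(votes) AS total
      FROM polls_options
      GROUP BY poll_id) t ON t.poll_id = o.poll_id
WHERE o.poll_id = 1;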
Good luck!
I have a problem in sql where I need to generate a packing list from a list of transactions.
Data Model
The transactions are stored in a table that contains:
transaction id
item id
item quantity
Each transaction can have multiple items (and consequently multiple rows with the same transaction id). Each item then has a quantity from 1 to N.
Business Problem
The business requires that we create a packing list, where each line item in the packing list contains the count of each item in the box.
Each box can only contain 160 items (they all happen to be the same size/weight). Based on the total count of the order we need to split items into different boxes (sometimes splitting even the individual item's collection into two boxes)
So the challenge is to take that data schema and come up with the result set that includes how many of each item belong in each box.
I am currently brute forcing this in some not so pretty ways and wondering if anyone has an elegant/simple solution that I've overlooked.
Example In/Out
We really need to isolate how many of each item end up in each box...for example:
Order 1:
100 of item A
100 of item B
140 of item C
This should result in three rows in the result set:
Box 1: A (100), B (60)
Box 2: B (40), C (120)
Box 3: C (20)
Ideally the query would be smart enough to put all of C together, but at this point - we're not too concerned with that.
How about something like
SELECT SUM([Item quantity]) as totalItems
, SUM([Item quantity]) / 160 as totalBoxes
, MOD(SUM([Item quantity]), 160) as amountInLastBox
FROM [Transactions]
GROUP BY [Transaction Id]
Let me know what fields in the resultset you're looking for and I could come up with a better one
I was looking for something similar, and all I could achieve was expanding the rows to the number of items in each transaction and then grouping them into bins. Not very elegant, though. Moreover, because string aggregation is still very cumbersome in SQL Server (Oracle, I miss you!), I have to leave the last part out, i.e. putting the counts on one single row.
My solution is as follows:
Example transactions table:
INSERT INTO transactions
(trans_id, item, cnt) VALUES
('1','A','50'),
('2','A','140'),
('3','B','100'),
('4','C','80');
GO
Create a dummy sequence table, which contains numbers from 1 to 1000 (I assume that maximum number allowed for an item in a single transaction is 1000):
CREATE TABLE numseq (n INT NOT NULL IDENTITY) ;
GO
INSERT numseq DEFAULT VALUES ;
WHILE SCOPE_IDENTITY() < 1000 INSERT numseq DEFAULT VALUES ;
GO
Now we can generate a temporary table from the transactions table, in which each transaction/item pair appears "cnt" times (via the subquery), then assign bin numbers using integer division and group by bin number:
SELECT bin_id, item, count(*) AS count_in_bin
INTO result
FROM (
    SELECT t.item, ((row_number() over (order by t.item, s.n) - 1) / 160) + 1 AS bin_id
    FROM transactions t
    INNER JOIN numseq s
        ON t.cnt >= s.n -- join conditionally to repeat transaction rows "cnt" times
) a
GROUP BY bin_id, item
ORDER BY bin_id, item
GO
Result is:
bin_id item count_in_bin
1 A 160
2 A 30
2 B 100
2 C 30
3 C 50
In Oracle, the last step would be as simple as that:
SELECT bin_id, WM_CONCAT(CONCAT(item,'(',count_in_bin,')')) contents
FROM result
GROUP BY bin_id
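For what it's worth, on SQL Server 2017 and later the last step can be done much the same way with STRING_AGG (a sketch, using the result table created above):
SELECT bin_id,
       STRING_AGG(CONCAT(item, '(', count_in_bin, ')'), ', ') AS contents
FROM result
GROUP BY bin_id;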
This isn't the prettiest answer, but I am using a similar method to keep track of stock items through an order process; it is easy to understand and may lead you to develop a better method than mine.
I would create a table called "PackedItem" or something similar. The columns would be:
packed_item_id (int) - Primary Key, Identity column
trans_id (int)
item_id (int)
box_number (int)
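A sketch of that table in SQL Server syntax (box_number stays NULL until the unit is assigned to a box):
CREATE TABLE PackedItem (
    packed_item_id INT IDENTITY(1,1) PRIMARY KEY,
    trans_id INT NOT NULL,
    item_id INT NOT NULL,
    box_number INT NULL
);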
Each record in this table represents 1 physical unit you will ship.
Let's say someone adds a line to transaction 4 with 20 of item 12; I would add 20 records to the PackedItem table, all with that transaction ID, the item ID, and a NULL box number. If a line is updated, you need to add or remove records from the PackedItem table so that there is always a 1:1 correlation.
When the time comes to ship, you can simply
SELECT TOP 160 * FROM PackedItem WHERE trans_id = 4 AND box_number IS NULL
and set the box_number on those records to the next available box number, repeating until no records remain where box_number is NULL. This is possible using one fairly complicated UPDATE statement inside a WHILE loop, which I don't have the time to construct fully (a rough sketch follows).
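A rough sketch of that loop (untested; the ORDER BY, which keeps units of the same item together, is an assumption about how you want the boxes filled):
DECLARE @box int = 0;

WHILE EXISTS (SELECT 1 FROM PackedItem WHERE trans_id = 4 AND box_number IS NULL)
BEGIN
    SET @box = @box + 1;  -- move on to the next box

    -- assign the next 160 unboxed units to the current box
    WITH next_box AS (
        SELECT TOP (160) *
        FROM PackedItem
        WHERE trans_id = 4 AND box_number IS NULL
        ORDER BY item_id, packed_item_id
    )
    UPDATE next_box SET box_number = @box;
END;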
You can now easily get your desired packing list by querying this table as follows:
SELECT box_number, item_id, COUNT(*) AS Qty
FROM PackedItem
WHERE trans_id = 4
GROUP BY box_number, item_id
Advantages - easy to understand, fairly easy to implement.
Pitfalls - if the table gets out of sync with the lines on the transaction, the final result can be wrong; the table will accumulate many records and means extra work for the server, and each ID field will need to be indexed to keep performance good.
Below is my (simplified) schema (in MySQL ver. 5.0.51b) and my strategy for updating it. There has got to be a better way. Inserting a new item requires 4 trips to the database and editing/updating an item takes up to 7!
items: itemId, itemName
categories: catId, catName
map: mapId*, itemId, catId
* mapId (varchar) is concat of itemId + | + catId
1) If inserting: insert item. Get itemId via MySQL API.
Else updating: just update the item table. We already have the itemId.
2) Conditionally batch insert into categories.
INSERT IGNORE INTO categories (catName)
VALUES ('each'), ('category'), ('name');
3) Select IDs from categories.
SELECT catId FROM categories
WHERE catName = 'each' OR catName = 'category' OR catName = 'name';
4) Conditionally batch insert into map.
INSERT IGNORE INTO map (mapId, itemId, catId)
VALUES ('1|1', 1, 1), ('1|2', 1, 2), ('1|3', 1, 3);
If inserting: we're done. Else updating: continue.
5) It's possible that we no longer associate a category with this item that we did prior to the update. Delete old categories for this itemId.
DELETE FROM MAP WHERE itemId = 2
AND catID <> 2 AND catID <> 3 AND catID <> 5;
6) If we have disassociated ourselves from a category, it's possible that we left it orphaned. We do not want categories with no items. Therefore, if affected rows > 0, kill orphaned categories. I haven't found a way to combine these in MySQL, so this is #6 & #7.
SELECT categories.catId
FROM categories
LEFT JOIN map USING (catId)
GROUP BY categories.catId
HAVING COUNT(map.catId) < 1;
7) Delete the IDs found in step 6.
DELETE FROM categories
WHERE catId IN (9, 10);
Please tell me there's a better way that I'm not seeing.
Also, if you are worried about trips to the db, wrap the steps in a stored procedure. Then you have one trip.
There are a number of things you can do to make this a bit easier:
Read about INSERT ... ON DUPLICATE KEY UPDATE: http://dev.mysql.com/doc/refman/5.0/en/insert-on-duplicate.html
Delete the old categories before you insert the new ones; a simpler DELETE like this can make better use of an index:
DELETE FROM map WHERE itemId=2;
You probably don't need map.mapID. Instead, declare a compound primary key over (itemID, catID).
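For example, a sketch of the map table without mapID (column types follow the existing schema; everything else about the table is assumed):
CREATE TABLE map (
    itemId INT NOT NULL,
    catId INT NOT NULL,
    PRIMARY KEY (itemId, catId)
);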
As Peter says in his answer, use MySQL's multi-table delete:
DELETE categories.* FROM categories LEFT JOIN map USING (catId)
WHERE map.catID IS NULL
Steps 6 & 7 can be combined easily enough:
DELETE categories.*
FROM categories
LEFT JOIN map USING (catId)
WHERE map.catID IS NULL;
Steps 3 & 4 can also be combined:
INSERT IGNORE INTO map (mapId, itemId, catId)
SELECT CONCAT('1|', c.catId), 1, c.catID
FROM categories AS c
WHERE c.catName IN('each','category','name');
Otherwise, your solution is pretty standard, unless you want to use triggers to maintain the map table.