I want to change an order's State to 'shipped' if its order_no is inside the today shipped table, using SQL in a query (I am using MS Access).
You can use a where clause. For instance:
update orders
set State = 'shipped'
where order_no in (select ts.order_no from todayshipped as ts);
You might want to add and State <> 'shipped' to avoid updating already shipped rows.
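Put together, that looks like this:
update orders
set State = 'shipped'
where order_no in (select ts.order_no from todayshipped as ts)
  and State <> 'shipped';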
For performance reasons, I often suggest exists instead:
update orders
set State = 'shipped'
where exists (select 1
from todayshipped as ts
where ts.order_no = orders.order_no
);
This can easily take advantage of an index on todayshipped(order_no).
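If that index does not exist yet, it can be created with DDL like this (the index name is just an example):
create index idx_todayshipped_order_no on todayshipped (order_no);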
Apologies for the terrible title; I'm unsure how to express my need succinctly. I know there is an answer for this with MySQL, but it does not work for Oracle.
I have a table where my OrderID field is not distinct, as there is one row per line item on the order. When a user deletes certain line items from the order and approves it, I get a completion date for the line items that were approved, and a null value for those deleted. I need to update the completiondate for all of the lines on that order. This is what it looks like:
And this is what I am looking for
I have found code that will update if I pick a specific OrderID, but if I open it up to scan the entire table, it will take 35 hrs (not feasible, obviously)
update batchmgr.udt_buyer a
set a.completedate = (select b.completedate
                      from batchmgr.udt_buyer b
                      where b.completedate is not null
                        and b.orderid = a.orderid)
where a.orderid = '221292540';
You can use:
update batchmgr.udt_buyer b
set completedate = (select b2.completedate
from batchmgr.udt_buyer b2
where b2.completedate is not null and
b2.orderid = b.orderid and
rownum = 1
)
where b.orderid = '221292540' and b.completedate is null;
For performance, you want an index on (orderid, completedate).
Also, be sure that orderid is a string. If not, drop the single quotes in the constant comparison.
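For example, if orderid is actually numeric, the final condition would simply be written without the quotes:
where b.orderid = 221292540 and b.completedate is null;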
I would use max() just in case you have different completion dates for the same order:
update batchmgr.udt_buyer a
set a.completedate=(select max(b.completedate)
from batchmgr.udt_buyer b
where b.completedate is not null
and b.orderid=a.orderid)
where a.orderid ='221292540';
Or you can create a view with the analytic function max() over () to get the same result without any updates:
create or replace view v_order_items as
select
OrderId
,Buyer
,OrderType
,Item
,CreateDate
,max(CompleteDate)over(partition by OrderId) as CompleteDate
from batchmgr.udt_buyer;
But it would be better to normalize your data and store orders separately from order items.
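A rough sketch of what a normalized layout could look like (data types and keys are guesses, not taken from the real schema):
-- orders: one row per order
create table orders (
    orderid      varchar2(20) primary key,
    buyer        varchar2(50),
    ordertype    varchar2(20),
    createdate   date,
    completedate date
);

-- order_items: one row per line item
create table order_items (
    orderid varchar2(20) not null references orders(orderid),
    item    varchar2(50) not null,
    primary key (orderid, item)
);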
I didn't find a working solution for creating a "lookup column" in a Firebird database.
Here is an example:
Table1: Orders
[OrderID] [CustomerID] [CustomerName]
Table2: Customers
[ID] [Name]
When I run SELECT * FROM ORDERS I want to get OrderID, CustomerID and CustomerName... but CustomerName should automatically be computed by looking up the "CustomerID" in the "ID" column of the "Customers" table and returning the content of the "Name" column.
Firebird has calculated fields (generated always as/computed by), and these allow selecting from other tables (contrary to an earlier version of this answer, which stated that Firebird doesn't support this).
However, I suggest you use a view instead, as I think it performs better (haven't verified this, so I suggest you test this if performance is important).
Use a view
The common way would be to define a base table and an accompanying view that gathers the necessary data at query time. Instead of using the base table, people would query from the view.
create view order_with_customer
as
select orders.id, orders.customer_id, customer.name
from orders
inner join customer on customer.id = orders.customer_id;
Or you could just skip the view and use the above join in your own queries.
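Querying the view then works just like querying a table, for example (the customer id is only illustrative):
select *
from order_with_customer
where customer_id = 1;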
Alternative: calculated fields
I label this as an alternative and not the main solution, as I think using a view would be the preferable solution.
To use calculated fields, you can use the following syntax (note the double parentheses around the query):
create table orders (
id integer generated by default as identity primary key,
customer_id integer not null references customer(id),
customer_name generated always as ((select name from customer where id = customer_id))
)
Updates to the customer table will be automatically reflected in the orders table.
As far as I'm aware, the performance of this option is less than when using a join (as used in the view example), but you might want to test that for yourself.
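As a quick illustration (assuming a customer row with id 1 and name 'name1' exists, as in the sample data further down):
insert into orders (customer_id) values (1);

-- customer_name is computed at query time from the customer table
select id, customer_id, customer_name from orders;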
FB3+ with function
With Firebird 3, you can also create calculated fields using a function, which makes the expression itself shorter.
To do this, create a function that selects from the customer table:
create function lookup_customer_name(customer_id integer)
returns varchar(50)
as
begin
return (select name from customer where id = :customer_id);
end
And then create the table as:
create table orders (
id integer generated by default as identity primary key,
customer_id integer not null references customer(id),
customer_name generated always as (lookup_customer_name(customer_id))
);
Updates to the customer table will be automatically reflected in the orders table. This solution can be relatively slow when selecting a lot of records, as the function will be executed for each row individually, which is a lot less efficient than performing a join.
Alternative: use a trigger
However if you want to update the table at insert (or update) time with information from another table, you could use a trigger.
I'll be using Firebird 3 for my answer, but it should translate - with some minor differences - to earlier versions as well.
So assuming a table customer:
create table customer (
id integer generated by default as identity primary key,
name varchar(50) not null
);
with sample data:
insert into customer(name) values ('name1');
insert into customer(name) values ('name2');
And a table orders:
create table orders (
id integer generated by default as identity primary key,
customer_id integer not null references customer(id),
customer_name varchar(50) not null
)
You then define a trigger:
create trigger orders_bi_bu
active before insert or update
on orders
as
begin
new.customer_name = (select name from customer where id = new.customer_id);
end
Now when we use:
insert into orders(customer_id) values (1);
the result is:
id  customer_id  customer_name
1   1            name1
Update:
update orders set customer_id = 2 where id = 1;
Result:
id  customer_id  customer_name
1   2            name2
The downside of a trigger is that updating the name in the customer table will not automatically be reflected in the orders table. You would need to keep track of these dependencies yourself, and create an after update trigger on customer that updates the dependent records, which can lead to update/lock conflicts.
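A rough sketch of such an after update trigger on customer (using the tables from the example above; untested):
create trigger customer_au
active after update
on customer
as
begin
  -- push the new name down to all dependent order rows
  if (new.name is distinct from old.name) then
    update orders
    set customer_name = new.name
    where customer_id = new.id;
end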
There is no need for a complex lookup field here.
There is no need to add a persistent field [CustomerName] to Table1.
As Gordon said, a simple join is enough:
Select T1.OrderID, T2.ID, T2.Name
From Customers T2
Join Orders T1 On T1.CustomerID = T2.ID
That said, if you want to use lookup fields (as we do in a dataset) with SQL, you can use something like:
Select T1.OrderID, T2.ID,
    (Select T3.YourLookupField From T3 Where T3.ID = T2.ID)
From Customers T2
Join Orders T1 On T1.CustomerID = T2.ID
In my access 2016 database, I have a table tblCustomerInfo with a field customer_code. There are some old customer_code values that need to be updated to newer values (for example, all rows with customer_code = 103 should be updated to customer_code = 122).
I can achieve something like this going one customer_code at a time using queries like:
UPDATE tblCustomerInfo set customer_code = 122 Where customer_code = 103;
UPDATE tblCustomerInfo set customer_code = 433 Where customer_code = 106;
...
However, I would like to avoid having to run a separate query for each customer_code. Is there any way to update all the codes, each to a different new value, in a single query?
Create a table, e.g. CustomerCodes, with two fields, OldCode and NewCode, and add in all the values. Then run an update query like this:
UPDATE CustomerCodes INNER JOIN tblCustomerInfo
ON CustomerCodes.OldCode = tblCustomerInfo.Customer_Code
SET tblCustomerInfo.Customer_Code = [CustomerCodes].[NewCode];
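For completeness, the mapping table could be created and filled like this (run each statement as its own query in Access; Long Integer data types are an assumption, and the value pairs come from the question):
CREATE TABLE CustomerCodes (OldCode LONG, NewCode LONG);

INSERT INTO CustomerCodes (OldCode, NewCode) VALUES (103, 122);
INSERT INTO CustomerCodes (OldCode, NewCode) VALUES (106, 433);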
Alternative
If there aren't too many to change, you can use a Switch() expression like this:
UPDATE tblCustomerInfo
SET Customer_Code =
SWITCH(Customer_Code=103,126,
Customer_Code = 106,130,
Customer_Code = 107,133);
There is, in my experience, a limit on the number of pairs you can have in a Switch() expression, although I have never bothered to find out exactly what the limit is - it doesn't seem to be documented.
I have one table named ORDERS.
This table contains OrderNumbers, each with the name of the person it belongs to and the address lines for that person.
However, sometimes the data is inconsistent;
for example, looking at the table screenshot (Orders table with bad data to fix):
you can notice that OrderNumber 1 has a name and address lines 1-2-3-4 associated with it; sometimes those all differ by some character or are even null.
My goal is to update all those 3 rows with one set of data that is already there, so that all 3 rows end up equal.
To make it clearer, the expected result should look like this:
[screenshot: expected result]
I am currently using a MERGE statement to avoid a CURSOR (for loop),
but I am having problems making it work.
Here is the SQL:
MERGE INTO ORDERS O USING
(SELECT
INNER.ORDERNUMBER,
INNER.NAME,
INNER.LINE1,
INNER.LINE2,
INNER.LINE3,
INNER.LINE4
FROM ORDERS INNER
) TEMP
ON( O.ORDERNUMBER = TEMP.ORDERNUMBER )
WHEN MATCHED THEN
UPDATE
SET
O.NAME = TEMP.NAME,
O.LINE1 = TEMP.LINE1,
O.LINE2 = TEMP.LINE2,
O.LINE3 = TEMP.LINE3,
O.LINE4 = TEMP.LINE4;
The biggest issue I am facing is picking a single row out of the 3 at random - it does not matter which row I pick to update the lines,
as long as I make the records exactly the same for an order number.
I also used ROWNUM = 1, but in multiple updates it will only output one row and may update thousands of lines with the same address and name belonging to a single order number.
Order number is the join column to use.
A simple correlated subquery in an update statement should work:
update orders t1
set (t1.name, t1.line1, t1.line2, t1.line3, t1.line4) =
(select t2.name, t2.line1, t2.line2, t2.line3, t2.line4
from orders t2
where t2.OrderNumber = t1.OrderNumber
and rownum < 2)
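If you first want to see which order numbers actually contain rows whose non-null values disagree, a quick check like this can help (note that count(distinct ...) ignores nulls, so rows that differ only because one column is null will not be flagged):
select OrderNumber
from orders
group by OrderNumber
having count(distinct name)  > 1
    or count(distinct line1) > 1
    or count(distinct line2) > 1
    or count(distinct line3) > 1
    or count(distinct line4) > 1;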
For a typical products & shipping database, I am exploring the best way to run a trigger that works as follows:
When an order line is set to 'Complete', a trigger is run that:
Looks for any other order lines for that order.
If all other order lines for that order are also 'Complete',
Updates the order header table to complete.
For clarity: the order header table stores the overall order total, and the orderLines table stores each product of the order.
So far, the trigger is written as such:
CREATE OR REPLACE TRIGGER orderComplete
after update ON orderline
for each row
WHEN (new.orderline_fulfilled = 'Y')
DECLARE count NUMBER := 5;
ordersNotDone NUMBER;
BEGIN
SELECT COUNT(Orderline_fulfilled) INTO ordersNotDone
FROM orderHeader
JOIN orderline ON
orderHeader.Order_id = orderLine.Orderline_order
WHERE Order_id = :old.orderline_order
AND orderline_fulfilled = 'Y';
IF ordersNotDone = 0
THEN
UPDATE orderHeader
SET completed = SYSDATE
WHERE orderId = :old.orderline_order;
ENDIF;
END;
The above causes the mutating table error when updating an orderline row.
Enforcing integrity with a trigger is inherently problematic because the RDBMS read consistency mode allows multiple changes simultaneously that cannot see each others' result.
A better solution might be to avoid denormalising the data, and rely on detecting the presence of an incomplete order line to identify incomplete orders. As this would be the minority of cases it can be optimised with a function-based index along the lines of:
create index my_index on orderline(case orderline_complete when 'NO' then orderid else null end)
This will index only the values of orderline where orderline_complete is 'NO', so if there are only 100 such rows in the table then the index will only contain 100 entries.
Identifying incomplete orders is then a matter only of a full or fast full index scan of a very compact index with a query:
select distinct
case orderline_complete when 'NO' then orderid else null end orderid
from
orderline
where
case orderline_complete when 'NO' then orderid else null end is not null;
If you are using 11g, then look into compound triggers; example: http://www.toadworld.com/KNOWLEDGE/KnowledgeXpertforOracle/tabid/648/TopicID/TRGC1/Default.aspx
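Though untested here, a compound trigger for this case might look roughly like the sketch below (the column names orderline_order, orderline_fulfilled, and orderHeader.order_id / completed are taken from the question's code and assumed to be correct):
CREATE OR REPLACE TRIGGER orderline_complete_ct
FOR UPDATE OF orderline_fulfilled ON orderline
COMPOUND TRIGGER
  -- remember which orders were touched by the statement
  TYPE t_order_ids IS TABLE OF orderline.orderline_order%TYPE;
  g_order_ids t_order_ids := t_order_ids();

  AFTER EACH ROW IS
  BEGIN
    -- no query against orderline here, so no mutating table error
    IF :new.orderline_fulfilled = 'Y' THEN
      g_order_ids.EXTEND;
      g_order_ids(g_order_ids.COUNT) := :new.orderline_order;
    END IF;
  END AFTER EACH ROW;

  AFTER STATEMENT IS
  BEGIN
    -- querying orderline is safe again once the statement has finished
    FOR i IN 1 .. g_order_ids.COUNT LOOP
      UPDATE orderHeader oh
      SET oh.completed = SYSDATE
      WHERE oh.order_id = g_order_ids(i)
      AND NOT EXISTS (SELECT 1
                      FROM orderline ol
                      WHERE ol.orderline_order = oh.order_id
                      AND (ol.orderline_fulfilled <> 'Y'
                           OR ol.orderline_fulfilled IS NULL));
    END LOOP;
  END AFTER STATEMENT;
END orderline_complete_ct;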
The simplest answer is to use a slightly different type of trigger, one which fires not after each row is updated but after the whole statement has finished updating the table. This does not suffer from the mutating table problem.
So do something like:
CREATE or REPLACE TRIGGER trigger_name
AFTER INSERT OR UPDATE ON orderline  -- note: no FOR EACH ROW, so this is a statement-level trigger
BEGIN
  -- loop over all orders which contain no unfulfilled order lines
  FOR lrec IN (SELECT ol.order_id
               FROM orderline ol
               GROUP BY ol.order_id
               HAVING SUM(CASE WHEN ol.orderline_fulfilled = 'Y' THEN 0 ELSE 1 END) = 0)
  LOOP
    -- do stuff to lrec.order_id because this order is complete
    NULL;
  END LOOP;
END;
So here we may have completed multiple orders in a single statement, so the trigger needs to cope with that. Sorry, I do not have an Oracle instance to play with at home. Hope this helps.