MS Access: Update + Append Simultaneously? - sql

I have two tables of interest - one is my "main" data table which needs to be updated periodically (ARIES_AC_PRODUCT) and the other is a side table that I pull in new data from (Update_AC_Product). My goal is for Access to update the records if they already exist in the main table, otherwise just append the new records (all with one query). Currently I have two queries for completing this goal... One is an update query and the other is an append query. Note that both tables have the exact same structure (5 fields which are PROPNUM, P_DATE, OIL, GAS, WATER). Below is the SQL code for each query:
Append query:
INSERT INTO ARIES_AC_PRODUCT (PROPNUM, P_DATE, OIL, GAS, WATER)
SELECT Update_AC_Product.PROPNUM, Update_AC_Product.P_DATE, Update_AC_Product.OIL, Update_AC_Product.GAS, Update_AC_Product.WATER
FROM Update_AC_Product;
Update query:
UPDATE ARIES_AC_PRODUCT
INNER JOIN Update_AC_Product ON (ARIES_AC_PRODUCT.PROPNUM = Update_AC_Product.PROPNUM) AND (ARIES_AC_PRODUCT.P_DATE = Update_AC_Product.P_DATE)
SET ARIES_AC_PRODUCT.PROPNUM = [Update_AC_Product]![PROPNUM], ARIES_AC_PRODUCT.P_DATE = [Update_AC_Product]![P_DATE], ARIES_AC_PRODUCT.OIL = [Update_AC_Product]![OIL], ARIES_AC_PRODUCT.GAS = [Update_AC_Product]![GAS], ARIES_AC_PRODUCT.WATER = [Update_AC_Product]![WATER];
Note that each table has a composite primary key made up of PROPNUM and P_DATE. PROPNUM is the item ID and P_DATE is the production date. One PROPNUM can have multiple P_DATE entries (so the combination of PROPNUM & P_DATE makes a unique record in each table).
Is it possible to combine these two queries into one so I don't have to append new records and then update the existing ones separately?
Thank you!!
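For comparison, the update-then-append pair can be sanity-checked outside Access. Below is a minimal sketch of the same two-step logic using SQLite from Python (table and field names taken from the question; the sample values are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Both tables share the same structure; (PROPNUM, P_DATE) is the composite key.
cur.executescript("""
CREATE TABLE ARIES_AC_PRODUCT (
    PROPNUM TEXT, P_DATE TEXT, OIL REAL, GAS REAL, WATER REAL,
    PRIMARY KEY (PROPNUM, P_DATE)
);
CREATE TABLE Update_AC_Product (
    PROPNUM TEXT, P_DATE TEXT, OIL REAL, GAS REAL, WATER REAL,
    PRIMARY KEY (PROPNUM, P_DATE)
);
""")

cur.execute("INSERT INTO ARIES_AC_PRODUCT VALUES ('A1', '2020-01', 10, 5, 1)")
cur.executemany("INSERT INTO Update_AC_Product VALUES (?, ?, ?, ?, ?)",
                [("A1", "2020-01", 12, 6, 2),   # existing record: should be updated
                 ("A1", "2020-02", 8, 4, 1)])   # new record: should be appended

# Step 1: update the rows that already exist in the main table.
cur.execute("""
UPDATE ARIES_AC_PRODUCT
SET (OIL, GAS, WATER) = (SELECT u.OIL, u.GAS, u.WATER
                         FROM Update_AC_Product u
                         WHERE u.PROPNUM = ARIES_AC_PRODUCT.PROPNUM
                           AND u.P_DATE  = ARIES_AC_PRODUCT.P_DATE)
WHERE EXISTS (SELECT 1 FROM Update_AC_Product u
              WHERE u.PROPNUM = ARIES_AC_PRODUCT.PROPNUM
                AND u.P_DATE  = ARIES_AC_PRODUCT.P_DATE)
""")

# Step 2: append the rows that do not exist yet (anti-join via NOT EXISTS).
cur.execute("""
INSERT INTO ARIES_AC_PRODUCT (PROPNUM, P_DATE, OIL, GAS, WATER)
SELECT u.PROPNUM, u.P_DATE, u.OIL, u.GAS, u.WATER
FROM Update_AC_Product u
WHERE NOT EXISTS (SELECT 1 FROM ARIES_AC_PRODUCT m
                  WHERE m.PROPNUM = u.PROPNUM AND m.P_DATE = u.P_DATE)
""")

rows = cur.execute("SELECT * FROM ARIES_AC_PRODUCT ORDER BY P_DATE").fetchall()
print(rows)
```

Engines with a native upsert (SQLite's INSERT ... ON CONFLICT, SQL Server's MERGE) can collapse the two steps into one statement; Access SQL has no such construct, so running the two saved queries back to back is the usual approach.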

Related

Find a single row and update it with nested queries

Good evening everyone. I'm trying to do an update on a table but I can't really make it work.
The feature needed is:
- Watch a field on a form; it contains the number of people that need to sit at the restaurant table.
- Find the first free table that has enough seats, set it as busy, and assign a random waiter.
Any idea?
more db infos:
Table "Waiters" is composed of ID (AutoNumber) and Name (Short Text). It has 2 names at the moment.
Table "Tables" is composed of ID (AutoNumber), Seats (Number), Busy (Yes/No), and Waiter (Short Text). All tables have a fixed number of seats and start with no Waiter and not busy.
SOLUTION:
In the end I used "First" for the assignment and it works perfectly, as follows:
UPDATE Tables SET Tables.Waiter = DLookUp("FirstName","TopWtr")
WHERE ID IN (SELECT FIRST (ID)
FROM Tables
WHERE Seats >= Val(Forms!Room!Text12) AND Waiter Is Null);
TOP wasn't working because it was returning multiple records (every table with the same number of seats), and I couldn't make it work with DISTINCT. This works, probably because the table is already ordered by seats.
Thanks to June7 for the input
You cannot SET a field value to the result of a SELECT subquery - a SELECT returns a dataset, not a single value. You can return a single value with a domain aggregate function.
Build a query object named TopWtr:
SELECT Top 1 ID FROM Waiters ORDER BY Rnd(ID);
Then use DLookup to pull that value. The Busy field seems redundant, because if a table has a waiter assigned, that would indicate it is busy.
UPDATE Tables SET Tables.Waiter = DLookUp("ID","TopWtr"), Tables.Busy = True
WHERE ID IN (SELECT TOP 1 ID FROM Tables
WHERE Seats >= Val(Forms!Room!Testo17) AND Waiter Is Null
ORDER BY Seats)
An INNER JOIN may be preferable to a WHERE clause:
UPDATE Tables INNER JOIN (SELECT TOP 1 ID FROM Tables
WHERE Seats >= Val(Forms!Room!Testo17) AND Waiter Is Null
ORDER BY Seats) AS T1
ON Tables.ID = T1.ID
SET Tables.Waiter = DLookUp("ID","TopWtr"), Tables.Busy = True
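The TOP 1/LIMIT pattern above is easy to verify on a throwaway database. Here is a small sketch of the same assignment logic in SQLite via Python (schema simplified from the question; the data and waiter names are invented, and LIMIT 1 plays the role of Access's SELECT TOP 1 ... ORDER BY Seats):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE Waiters (ID INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE Tables  (ID INTEGER PRIMARY KEY, Seats INTEGER,
                      Busy INTEGER DEFAULT 0, Waiter TEXT);
INSERT INTO Waiters (Name) VALUES ('Anna'), ('Marco');
INSERT INTO Tables (Seats) VALUES (2), (4), (4), (6);
""")

party_size = 3   # stands in for the form field (Forms!Room!Text12)

# Pick one random waiter (the role of the TopWtr query object).
waiter = cur.execute(
    "SELECT Name FROM Waiters ORDER BY random() LIMIT 1").fetchone()[0]

# Assign the smallest free table that has enough seats.
cur.execute("""
UPDATE Tables
SET Waiter = ?, Busy = 1
WHERE ID = (SELECT ID FROM Tables
            WHERE Seats >= ? AND Waiter IS NULL
            ORDER BY Seats, ID LIMIT 1)
""", (waiter, party_size))

assigned = cur.execute(
    "SELECT ID, Seats, Busy FROM Tables WHERE Waiter IS NOT NULL").fetchall()
print(assigned)   # exactly one table, with Seats >= 3
```

Adding a deterministic tie-breaker (here the ID) to the ORDER BY avoids the "multiple records with the same number of seats" problem the asker hit with TOP.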

Append Query Doesn't Append Missing Items

I have 2 tables. Table 1 has data from the bank account. Table 2 aggregates data from multiple other tables; to keep things simple, we will just have 2 tables. I need to append the data from table 1 into table 2.
I have a field in table2, "SrceFk". The concept is that when a record from Table1 appends, it will fill the table2.SrceFk with the table1 primary key and the table name. So record 302 will look like "BANK/302" after it appends. This way, when I run the append query, I can avoid duplicates.
The query is not working. I deleted the record from table2, but when I run the query, it just says "0 records appended", even though the foreign key is not present.
I am new to SQL, Access, and programming in general. I understand basic concepts. I have googled this issue and looked on Stack Overflow, but no luck.
This is my full statement:
INSERT INTO Main ( SrceFK, InvoDate, Descrip, AMT, Ac1, Ac2 )
SELECT Bank.ID & "/" & "BANK", Bank.TransDate, Bank.Descrip, Bank.TtlAmt, Bank.Ac1, Bank.Ac2
FROM Bank
WHERE NOT EXISTS
(
SELECT * FROM Main
WHERE Main.SrceFK = Bank.ID & "/" & "BANK"
);
I expect the query to add records that aren't present in the table, as needed.
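The NOT EXISTS anti-join pattern itself is sound. Here is a minimal SQLite sketch of it via Python (schema trimmed to the fields in the statement; sample data invented). Note that the guard only skips rows when the stored SrceFK matches the concatenated expression character for character, including the order of the ID and the "BANK" literal:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE Bank (ID INTEGER PRIMARY KEY, TransDate TEXT, Descrip TEXT,
                   TtlAmt REAL, Ac1 TEXT, Ac2 TEXT);
CREATE TABLE Main (SrceFK TEXT, InvoDate TEXT, Descrip TEXT,
                   AMT REAL, Ac1 TEXT, Ac2 TEXT);
INSERT INTO Bank VALUES (301, '2023-01-01', 'rent',  500, 'a', 'b');
INSERT INTO Bank VALUES (302, '2023-01-02', 'power',  80, 'a', 'b');
-- Record 301 was appended on an earlier run; 302 is still missing.
INSERT INTO Main VALUES ('301/BANK', '2023-01-01', 'rent', 500, 'a', 'b');
""")

# Append only the bank rows whose concatenated key is absent from Main
# (|| is SQLite's string concatenation, standing in for Access's &).
cur.execute("""
INSERT INTO Main (SrceFK, InvoDate, Descrip, AMT, Ac1, Ac2)
SELECT Bank.ID || '/' || 'BANK', Bank.TransDate, Bank.Descrip,
       Bank.TtlAmt, Bank.Ac1, Bank.Ac2
FROM Bank
WHERE NOT EXISTS (SELECT * FROM Main
                  WHERE Main.SrceFK = Bank.ID || '/' || 'BANK')
""")

keys = [r[0] for r in cur.execute("SELECT SrceFK FROM Main ORDER BY SrceFK")]
print(keys)   # only the missing 302 row is appended
```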

Dynamically Updating Columns with new Data

I am handling an SQL table with over 10K+ rows; essentially it controls updating the status of a production station over the day. Currently the SQL server will report a new message at the current timestamp - ergo a new entry can be generated for the same part hundreds of times a day while only the columns "Production_Status" and "TimeStamp" change. I want to create a new table that selects unique part names and then has two other columns that bring up the LATEST entry for THAT part.
I have currently selected the data and reordered it so the latest timestamp is first on the list. I am trying to build this dynamic table but I am new to SQL.
select dateTimeStamp,partNumber,lineStatus
from tblPLCData
where lineStatus like '_ Zone %' or lineStatus = 'Production'
order by dateTimeStamp desc;
The expected result should be a NewTable whose row count is based on how many parts are in our total production facility - this column will be static - plus two other columns that check the original table for the latest status and timestamp and update those two columns in the NewTable.
I don't need help with the table creation so much as the logic that surrounds updating rows based off of another table.
Much Appreciated.
It looks like you could take advantage of a subquery that finds the MAX lineStatusdate for each partNumber, then joins back to the table itself so that you can get the lineStatus value corresponding to the record with the max date. I just have you inserting/updating a temp table, but this is the general approach you could take.
-- New table that might already exist in your db, I am creating one here
create table #NewTable (
partNumber int,
lineStatus varchar(max),
last_update datetime
)
-- To initially set up your table or to update your table later with new part numbers that were not added before
insert into #NewTable
select tpd.partNumber, tpd.lineStatus, tpd.lineStatusdate
from tblPLCData tpd
join (
select partNumber, MAX(lineStatusdate) lineStatusDateMax
from tblPLCData
group by partNumber
) maxStatusDate on tpd.partNumber = maxStatusDate.partNumber
and tpd.lineStatusdate = maxStatusDate.lineStatusDateMax
left join #NewTable nt on tpd.partNumber = nt.partNumber
where (tpd.lineStatus like '_ Zone %' or tpd.lineStatus = 'Production') and nt.partNumber is null
-- To update your table whenever you deem it necessary to refresh it. I try to avoid triggers in my dbs
update nt set nt.lineStatus = tpd.lineStatus, nt.last_update = tpd.lineStatusDate
from tblPLCData tpd
join (
select partNumber, MAX(lineStatusdate) lineStatusDateMax
from tblPLCData
group by partNumber
) maxStatusDate on tpd.partNumber = maxStatusDate.partNumber
and tpd.lineStatusdate = maxStatusDate.lineStatusDateMax
join #NewTable nt on tpd.partNumber = nt.partNumber
where tpd.lineStatus like '_ Zone %' or tpd.lineStatus = 'Production'
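The MAX-per-group self-join at the heart of both statements can be checked on a tiny dataset. A SQLite sketch via Python (column names from the question's query; the sample rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE tblPLCData (dateTimeStamp TEXT, partNumber INTEGER, lineStatus TEXT);
INSERT INTO tblPLCData VALUES
 ('2023-05-01 08:00', 101, 'Production'),
 ('2023-05-01 09:30', 101, '1 Zone Fault'),
 ('2023-05-01 08:15', 102, 'Production');
""")

# Latest row per part: group to find MAX(dateTimeStamp), then join back
# to recover the lineStatus that belongs to that newest row.
rows = cur.execute("""
SELECT t.partNumber, t.lineStatus, t.dateTimeStamp
FROM tblPLCData t
JOIN (SELECT partNumber, MAX(dateTimeStamp) AS maxTs
      FROM tblPLCData
      WHERE lineStatus LIKE '_ Zone %' OR lineStatus = 'Production'
      GROUP BY partNumber) m
  ON t.partNumber = m.partNumber AND t.dateTimeStamp = m.maxTs
ORDER BY t.partNumber
""").fetchall()
print(rows)   # one row per part, each carrying its newest status
```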

SQL Server, updating item quantities of new items that are replacing old items

I have a CSV with two columns OldItem and NewItem; each column holds a list of integers. Note - the CSV will hold around 1,000 rows.
OldItem | NewItem
-----------------
1021669 | 1167467
1021680 | 1167468
1021712 | 1167466
1049043 | 1000062
We have old items in the system that are being replaced by the new items, and we would like to capture the current quantity of the first OldItem and assign it to the first NewItem, the quantity of the second OldItem to the second NewItem, and so on.
The other fun part of the issue is that the Item Numbers that are in the spreadsheet don't match up to the item numbers associated with the quantities, there's a translation table in the system called Alias.
Here are the tables and columns we're interacting with:
table Alias (essentially a translation table)
column Alias (the numbers in the spreadsheet)
column ItemID (the numbers in table "Items" that hold the quantities)
table Items (this holds all the items, new and old)
column ItemID
column Quantity
The only way I can think of doing this is doing a foreach on every OldItem like this, pseudo-code incoming:
foreach OldItem (Select Alias.ItemID WHERE Alias.Alias = OldItem)
then somehow, as I don't know how to return and use that result in SQL:
Select Item.Quantity where Item.ItemID = Alias.ItemID.
At this point I have the quantity that I want, now I have to reference back to the CSV, find the NewItem associated with the OldItem, and do this all over again with the NewItem and then update the NewItem Quantity to the one I found from the OldItem.
-dizzy-
Please help! I could solve this problem by wrapping SQL in PowerShell to handle the logical bits, but that has severe performance consequences, and I have to do this on MANY databases remotely with very bad network connections!
Given that you have connectivity issues, I suggest the following:
Create a working table in your database
Import your CSV into the working table
Run a script that copies aliases and quantities into the working table. Not required but helps with auditing
Run a script that validates the data
Run a script that copies required data into Items
It's important to note that this assumes that old items are unique and only ever map to one new item. There are checks in the 'pre update check' section for that.
Create a working table
Open SQL Server Management Studio and run this script in your database (choose it in the dropdown)
-- Create a schema to hold working tables that aren't required by the application
CREATE SCHEMA adm;
-- Now create a table in this schema
IF EXISTS (SELECT * FROM sys.objects WHERE name = 'ItemTransfer'
AND type = 'U'
AND schema_id = SCHEMA_ID('adm'))
DROP TABLE adm.ItemTransfer;
CREATE TABLE adm.ItemTransfer (
OldItem INT NOT NULL,
NewItem INT NOT NULL,
OldAlias VARCHAR(50) NULL,
NewAlias VARCHAR(50) NULL,
OldQuantity NUMERIC(19,2) NULL
);
Import the CSV data
There are a number of ways to do this. Your constraint is your unreliable network, and how comfortable you are troubleshooting unfamiliar tools. Here is one method that can be rerun without causing duplicates:
Open your CSV in Excel and paste this monstrosity into column C, row 2:
="INSERT INTO adm.ItemTransfer (OldItem, NewItem) SELECT " & A2 & "," & B2 & " WHERE NOT EXISTS (SELECT * FROM adm.ItemTransfer WHERE OldItem=" & A2 & " AND NewItem=" & B2 & ");"
This will generate an insert statement for that data. Drag it down to generate all insert statements. There will be a bunch of lines that look something like this:
INSERT INTO adm.ItemTransfer (OldItem, NewItem) SELECT 1,2 WHERE NOT EXISTS (SELECT * FROM adm.ItemTransfer WHERE OldItem=1 AND NewItem=2);
Copy/paste this string of inserts into SQL Server Management Studio and run it. It should insert all of the data into your working table.
I also suggest that you save this file to a .SQL file. This insert statement only inserts if the record isn't already there, so it can be rerun.
Note: There are many ways to import data into SQL Server. The next easiest way is to right-click on the database / Tasks / Import Flat File, but it's more complicated to avoid duplicates and to restart the import.
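If Excel is not at hand, the same re-runnable INSERT statements can be generated with a short script (a sketch; the inline CSV text stands in for the real file, and the OldItem/NewItem header names are assumed):

```python
import csv
import io

# Stand-in for the real CSV file; headers assumed to be OldItem,NewItem.
csv_text = "OldItem,NewItem\n1021669,1167467\n1021680,1167468\n"

# Same guarded insert the Excel formula produces: safe to rerun.
template = ("INSERT INTO adm.ItemTransfer (OldItem, NewItem) "
            "SELECT {o}, {n} WHERE NOT EXISTS (SELECT * FROM adm.ItemTransfer "
            "WHERE OldItem={o} AND NewItem={n});")

statements = [template.format(o=row["OldItem"], n=row["NewItem"])
              for row in csv.DictReader(io.StringIO(csv_text))]

print("\n".join(statements))   # paste this into SSMS, or save it as a .SQL file
```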
Now you can run SELECT * FROM adm.ItemTransfer and you should see all of your records
Map Alias and Qty
This step can actually be done on the fly but lets just write them into the working table as it will allow us to audit afterwards
These two scripts copy the alias into the working table:
UPDATE TGT
SET OldAlias = SRC.ItemID
FROM
adm.ItemTransfer TGT
INNER JOIN
Alias SRC
ON TGT.OldItem = SRC.Alias;
UPDATE TGT
SET NewAlias = SRC.ItemID
FROM
adm.ItemTransfer TGT
INNER JOIN
Alias SRC
ON TGT.NewItem = SRC.Alias;
This one copies in the old item quantity
UPDATE TGT
SET OldQuantity = SRC.Quantity
FROM
adm.ItemTransfer TGT
INNER JOIN
Items SRC
ON TGT.OldAlias = SRC.ItemID;
After these steps, again run the select statement to inspect.
Pre update check
Before you actually do the update you should check data consistency
Count of records in the staging table:
SELECT
COUNT(*) AS TableCount,
COUNT(DISTINCT OldAlias) AS UniqueOldAlias,
COUNT(DISTINCT NewAlias) AS UniqueNewAlias
FROM adm.ItemTransfer;
The numbers should all be the same and should match the CSV record count. If not, you have a problem: you are missing records or you are not mapping one to one.
This select shows you old items missing an alias:
SELECT * FROM adm.ItemTransfer WHERE OldAlias IS NULL
This select shows you new items missing an alias:
SELECT * FROM adm.ItemTransfer WHERE NewAlias IS NULL
This select shows you old items missing from the item table
SELECT *
FROM adm.ItemTransfer T
WHERE NOT EXISTS (
SELECT * FROM Items I WHERE I.ItemID = T.OldAlias)
This select shows you new items missing from the item table
SELECT *
FROM adm.ItemTransfer T
WHERE NOT EXISTS (
SELECT * FROM Items I WHERE I.ItemID = T.NewAlias)
Backup the table and do the update
First backup the table inside the database like this:
SELECT *
INTO adm.Items_<dateandtime>
FROM Items
This script makes a copy of the Items table before you update it. You can delete it later if you like
The actual update is pretty simple because we worked it all out in the working table beforehand:
UPDATE TGT
SET Quantity = SRC.OldQuantity
FROM Items TGT
INNER JOIN
adm.ItemTransfer SRC
ON SRC.NewAlias = TGT.ItemID;
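Taken together, the alias lookups and the final update can be exercised end to end on a toy database. Below is a SQLite sketch via Python (schema reduced from the question, the adm schema prefix dropped because SQLite has no schemas; IDs and quantities invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE Alias (Alias INTEGER, ItemID INTEGER);
CREATE TABLE Items (ItemID INTEGER PRIMARY KEY, Quantity NUMERIC);
CREATE TABLE ItemTransfer (OldItem INTEGER, NewItem INTEGER,
                           OldAlias INTEGER, NewAlias INTEGER,
                           OldQuantity NUMERIC);
-- Spreadsheet numbers map to internal ItemIDs through the Alias table.
INSERT INTO Alias VALUES (1021669, 11), (1167467, 22);
INSERT INTO Items VALUES (11, 75), (22, 0);
INSERT INTO ItemTransfer (OldItem, NewItem) VALUES (1021669, 1167467);
""")

# Resolve the spreadsheet numbers to internal ItemIDs.
cur.execute("""UPDATE ItemTransfer SET
  OldAlias = (SELECT ItemID FROM Alias WHERE Alias.Alias = ItemTransfer.OldItem),
  NewAlias = (SELECT ItemID FROM Alias WHERE Alias.Alias = ItemTransfer.NewItem)""")

# Capture the old item's quantity into the working table...
cur.execute("""UPDATE ItemTransfer SET
  OldQuantity = (SELECT Quantity FROM Items
                 WHERE Items.ItemID = ItemTransfer.OldAlias)""")

# ...then assign it to the new item.
cur.execute("""UPDATE Items SET
  Quantity = (SELECT OldQuantity FROM ItemTransfer
              WHERE ItemTransfer.NewAlias = Items.ItemID)
WHERE ItemID IN (SELECT NewAlias FROM ItemTransfer)""")

result = cur.execute("SELECT Quantity FROM Items WHERE ItemID = 22").fetchone()[0]
print(result)   # the new item now carries the old item's quantity
```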
Summary
All of this can be bundled up into a script and automated if required. Either way, you should save all the working scripts to a .SQL file, along with the outputs of the SELECT test statements.

SQL-Oracle: Updating multiple table rows based on values contained in the same table

I have one table named: ORDERS
This table contains OrderNumbers which belong to the same person, along with address lines for that person.
However, sometimes the data is inconsistent.
For example, looking at the table screenshot ("Orders table with bad data to fix"), you can notice that OrderNumber 1 has a name associated with it and address lines 1-2-3-4; sometimes those are all different by some character, or even null.
My goal is to update all those 3 lines with one set of data that is already there, setting the 3 rows equally.
To make it more clear, the expected result is shown in a second screenshot.
I am currently using a MERGE statement to avoid a CURSOR (for loop), but I am having problems making it work.
Here is the SQL:
MERGE INTO ORDERS O USING
(SELECT
INNER.ORDERNUMBER,
INNER.NAME,
INNER.LINE1,
INNER.LINE2,
INNER.LINE3,
INNER.LINE4
FROM ORDERS INNER
) TEMP
ON( O.ORDERNUMBER = TEMP.ORDERNUMBER )
WHEN MATCHED THEN
UPDATE
SET
O.NAME = TEMP.NAME,
O.LINE1 = TEMP.LINE1,
O.LINE2 = TEMP.LINE2,
O.LINE3 = TEMP.LINE3,
O.LINE4 = TEMP.LINE4;
The biggest issue I am facing is picking a single row out of the 3 at random - it does not matter which row I pick to update the lines - as long as I make the records exactly the same for an order number.
I also used ROWNUM = 1, but across multiple updates it will only output one row and may update thousands of lines with the same address and name which belong to one order number.
OrderNumber is the join column to use...
Kind regards
A simple correlated subquery in an update statement should work:
update orders t1
set (t1.name, t1.line1, t1.line2, t1.line3, t1.line4) =
(select t2.name, t2.line1, t2.line2, t2.line3, t2.line4
from orders t2
where t2.OrderNumber = t1.OrderNumber
and rownum < 2)