I didn't find a working solution for creating a "lookup column" in a Firebird database.
Here is an example:
Table1: Orders
[OrderID] [CustomerID] [CustomerName]
Table2: Customers
[ID] [Name]
When I run SELECT * FROM ORDERS I want to get OrderID, CustomerID and CustomerName, but CustomerName should automatically be computed by looking up the "CustomerID" in the "ID" column of the "Customers" table and returning the content of the "Name" column.
Firebird has calculated fields (generated always as/computed by), and these allow selecting from other tables (contrary to an earlier version of this answer, which stated that Firebird doesn't support this).
However, I suggest you use a view instead, as I expect it to perform better (I haven't verified this, so test it yourself if performance is important).
Use a view
The common way would be to define a base table and an accompanying view that gathers the necessary data at query time. Instead of using the base table, people would query from the view.
create view order_with_customer
as
select orders.id, orders.customer_id, customer.name
from orders
inner join customer on customer.id = orders.customer_id;
Or you could just skip the view and use above join in your own queries.
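For instance, querying the view is then as simple as this (a minimal usage example, assuming a customer with id 1 exists):
select * from order_with_customer where customer_id = 1;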
Alternative: calculated fields
I label this as an alternative and not the main solution, as I think using a view would be the preferable solution.
To use calculated fields, you can use the following syntax (note the double parentheses around the query):
create table orders (
id integer generated by default as identity primary key,
customer_id integer not null references customer(id),
customer_name generated always as ((select name from customer where id = customer_id))
);
Updates to the customer table will be automatically reflected in the orders table.
As far as I'm aware, the performance of this option is less than when using a join (as used in the view example), but you might want to test that for yourself.
FB3+ with function
With Firebird 3, you can also create calculated fields using a function; this makes the expression itself shorter.
To do this, create a function that selects from the customer table:
create function lookup_customer_name(customer_id integer)
returns varchar(50)
as
begin
return (select name from customer where id = :customer_id);
end
And then create the table as:
create table orders (
id integer generated by default as identity primary key,
customer_id integer not null references customer(id),
customer_name generated always as (lookup_customer_name(customer_id))
);
Updates to the customer table will be automatically reflected in the orders table. This solution can be relatively slow when selecting a lot of records, as the function will be executed for each row individually, which is a lot less efficient than performing a join.
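If you want to sanity-check the function on its own first, a quick test could look like this (selecting from RDB$DATABASE is just the usual Firebird way to produce a single row):
select lookup_customer_name(1) from rdb$database;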
Alternative: use a trigger
However, if you want to update the table at insert (or update) time with information from another table, you could use a trigger.
I'll be using Firebird 3 for my answer, but it should translate - with some minor differences - to earlier versions as well.
So assuming a table customer:
create table customer (
id integer generated by default as identity primary key,
name varchar(50) not null
);
with sample data:
insert into customer(name) values ('name1');
insert into customer(name) values ('name2');
And a table orders:
create table orders (
id integer generated by default as identity primary key,
customer_id integer not null references customer(id),
customer_name varchar(50) not null
);
You then define a trigger:
create trigger orders_bi_bu
active before insert or update
on orders
as
begin
new.customer_name = (select name from customer where id = new.customer_id);
end
Now when we use:
insert into orders(customer_id) values (1);
the result is:
id customer_id customer_name
1 1 name1
Update:
update orders set customer_id = 2 where id = 1;
Result:
id customer_id customer_name
1 2 name2
The downside of a trigger is that updating the name in the customer table will not automatically be reflected in the orders table. You would need to keep track of these dependencies yourself, and create an after update trigger on customer that updates the dependent records, which can lead to update/lock conflicts.
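For completeness, a sketch of what such an after-update trigger on customer might look like (Firebird 3 syntax, reusing the table and column names from above; treat it as a starting point, not a drop-in solution):
create trigger customer_au_propagate
active after update on customer
as
begin
  -- push the new name to all orders that reference this customer
  if (new.name is distinct from old.name) then
    update orders
    set customer_name = new.name
    where customer_id = new.id;
end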
There is no need for a complex lookup field here, and no need to add a persistent [CustomerName] field on Table1.
As Gordon said, a simple join is enough:
Select T1.OrderID, T2.ID, T2.Name
From Customers T2
Join Orders T1 On T1.CustomerID = T2.ID
That said, if you want to use lookup fields (as we do in a Dataset) with SQL, you can use something like:
Select T1.OrderID, T2.ID,
( Select T3.YourLookupField From T3 Where T3.ID = T2.ID )
From Customers T2 Join Orders T1 On T1.CustomerID = T2.ID
Regards.
Related
I have a table that contains all the transaction information
(transaction date, customer name, transaction amount, unit amount, store ID).
How can I create a table that properly shows the stores each customer has visited? A customer could make purchases from multiple stores, so the relationship between customer and store should be one-to-many. I am trying to check the stores each customer has visited.
Example:
Transaction table
store_ID
transaction_date
customer_name
transaction_amount
transaction_unit
Expected Output
customer_name
store_list
This is just a hypothetical problem. Maybe listing each visited store separately could be better? (But I guess it might create chaos if we want to check which customers made purchases in those stores.) Thanks in advance :)
Since you already have a table with all information needed, maybe you are asking for a view, to avoid denormalized/duplicated data.
An example is shown below. Here I will use SQLite syntax (text instead of varchar, etc.). We are normalizing the data to avoid problems - for example, you have shown a table with customer names, but different customers can have the same name, so we use a customer_id instead.
First the tables:
create table if not exists store (id integer primary key, name text, address text);
create table if not exists customer (id integer, name text, document integer);
create table if not exists unit (symbol text primary key);
insert into unit values ('USD'),('EUR'),('GBP'); -- static data
create table if not exists transact (id integer primary key, store_id integer references store(id), customer_id integer references customer(id), transaction_date date, amount integer, unit text references unit(symbol));
-- transaction is a reserved word, so transact was used instead
Now the view:
-- store names concatenated with ',' using sqlite's group_concat
create view if not exists stores_visited as
select cust.name, group_concat(distinct store.name order by store.id)
from transact trx
join customer cust on trx.customer_id = cust.id
join store on trx.store_id = store.id
group by cust.name
order by cust.id;
-- second version, regular select with multiple lines
-- create view if not exists stores_visited as select cust.id, cust.name, store.id, store.name from transact trx join customer cust on trx.customer_id=cust.id join store on trx.store_id=store.id;
Sample data:
insert into store values
(1,'store one','address one'),
(2,'store two','address two'),
(3,'store three','address three');
insert into customer values
(1,'customer one','custdoc one'),
(2,'customer two','custdoc two'),
(3,'customer three','custdoc three');
insert into transact values
(1,1,1,date('2021-01-01'),1,'USD'),
(2,1,1,date('2021-01-02'),1,'USD'),
(3,1,2,date('2021-01-03'),1,'USD'),
(5,1,3,date('2021-01-04'),1,'USD'),
(6,2,1,date('2021-01-05'),1,'USD'),
(7,2,3,date('2021-01-06'),1,'USD'),
(8,3,2,date('2021-01-07'),1,'USD');
Testing:
select * from stores_visited;
-- customer one|store one,store two
-- customer two|store one,store three
-- customer three|store one,store two
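If you also want to check which customers made purchases in a particular store (the other concern mentioned in the question), the same joins can be filtered directly; 'store two' below is just one of the sample rows:
-- customers who bought something at a given store
select distinct cust.name
from transact trx
join customer cust on trx.customer_id = cust.id
join store on trx.store_id = store.id
where store.name = 'store two';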
I am using sqlite3.
I have one "currencies" table, and two tables that reference the currencies table using a foreign key, as follows:
CREATE TABLE currencies (
currency TEXT NOT NULL PRIMARY KEY
);
CREATE TABLE table1 (
currency TEXT NOT NULL PRIMARY KEY,
FOREIGN KEY(currency)
REFERENCES currencies(currency)
);
CREATE TABLE table2 (
currency TEXT NOT NULL PRIMARY KEY,
FOREIGN KEY(currency)
REFERENCES currencies(currency)
);
I would like to make sure that rows in the "currencies" table that are not referenced by any row from "table1" and "table2" will be removed automatically. This should behave like some kind of ref-counted object. When the reference count reaches zero, the relevant row from the "currencies" table should be erased.
What is the "SQL way" to solve this problem?
I am willing to redesign my tables if it could lead to an elegant solution.
I prefer to avoid solutions that require extra work from the application side, or solutions that require periodic cleanup.
Create an AFTER DELETE TRIGGER in each of table1 and table2:
CREATE TRIGGER remove_currencies_1 AFTER DELETE ON table1
BEGIN
DELETE FROM currencies
WHERE currency = OLD.currency
AND NOT EXISTS (SELECT 1 FROM table2 WHERE currency = OLD.currency);
END;
CREATE TRIGGER remove_currencies_2 AFTER DELETE ON table2
BEGIN
DELETE FROM currencies
WHERE currency = OLD.currency
AND NOT EXISTS (SELECT 1 FROM table1 WHERE currency = OLD.currency);
END;
Every time you delete a row from either table1 or table2, the corresponding trigger checks whether the other table still contains the deleted currency; if it does not, the currency is deleted from currencies.
There is no automatic way of doing this. The reverse can be handled using cascading delete foreign key references: when a currency is deleted, all related rows are deleted too.
You could schedule a job daily running something like:
delete from currencies c
where not exists (select 1 from table1 t1 where t1.currency = c.currency) and
not exists (select 1 from table2 t2 where t2.currency = c.currency);
If you need an automatic way of doing this, most DBMSs provide a trigger mechanism: you can create triggers on update and delete operations that run a query like the one above.
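A minimal sketch of such a trigger for table1 (SQLite syntax, reusing the table names from the question; a matching trigger on table2, and on updates if you allow them, would be needed as well):
CREATE TRIGGER cleanup_currencies_t1 AFTER DELETE ON table1
BEGIN
    -- remove currencies no longer referenced by either table
    DELETE FROM currencies
    WHERE NOT EXISTS (SELECT 1 FROM table1 WHERE table1.currency = currencies.currency)
      AND NOT EXISTS (SELECT 1 FROM table2 WHERE table2.currency = currencies.currency);
END;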
you can use a left join for that:
https://www.w3schools.com/sql/sql_join_left.asp
It returns a row for every row from the left table, even if there is no corresponding row in the right table, filling the right-hand columns with null. You can then test a NOT NULL column of the right table with IS NULL; this filters for the rows that have no counterpart in the right table.
For example:
SELECT currencies.currency FROM currencies LEFT JOIN table1 ON table1.currency = currencies.currency WHERE table1.currency IS NULL
will show the relevant rows for table1.
You can do the same with table2.
This gives you two queries that show which rows have no counterpart.
You can then use INTERSECT on the results, so that you get the rows that have no counterpart in either table:
SELECT * FROM query1 INTERSECT SELECT * FROM query2
Now you have the list of currencies to be deleted.
You can finish this by using a subqueried delete:
DELETE FROM currencies WHERE currency IN (SELECT ...)
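Putting those pieces together, the full statement could look like this (a sketch that assumes the corrected join conditions shown above):
DELETE FROM currencies
WHERE currency IN (
    SELECT currencies.currency
    FROM currencies
    LEFT JOIN table1 ON table1.currency = currencies.currency
    WHERE table1.currency IS NULL
    INTERSECT
    SELECT currencies.currency
    FROM currencies
    LEFT JOIN table2 ON table2.currency = currencies.currency
    WHERE table2.currency IS NULL
);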
I have 2 databases in SQL Server, DB1 has multiple tables and some of the tables are updated with new records continuously. DB2 has only 1 table which should contain all the combined info from the multiple tables in DB1, and needs to be updated every 2 hours.
For example, DB1 has 3 tables: "ProductInfo", "StationRecord", "StationInfo". The first 2 tables both have a timestamp column that indicates when a record is created (i.e. the two tables are updated asynchronously, ONLY when a product passes all stations in "StationRecord" will "ProductInfo" be updated with a new product), and the last table is fixed.
The tables are as follows:
USE DB1
GO
CREATE TABLE ProductInfo (
ProductID bigint Primary Key NOT NULL,
TimeCreated datetime,
ProductName nvarchar(255)
)
CREATE TABLE StationRecord (
RecordID bigint Primary Key NOT NULL,
TimeCreated datetime,
ProductID bigint NOT NULL,
StationID bigint
)
CREATE TABLE StationInfo (
StationID bigint Primary Key NOT NULL,
BOM_used nvarchar(255)
)
DB2 has only 1 table which contains a composite PK of "ProductID" & "StationID", as follows:
CREATE TABLE DB2.BOMHistory AS
SELECT
DB1.ProductInfo.ProductID,
DB1.ProductInfo.TimeCreated AS ProductCreated,
DB1.StationInfo.StationID,
DB1.StationInfo.BOM_used
FROM DB1.ProductInfo
JOIN DB1.StationRecord
ON DB1.ProductInfo.ProductID = DB1.StationRecord.ProductID
JOIN DB1.StationInfo
ON DB1.StationRecord.StationID = DB1.StationInfo.StationID
constraint PK_BOMHistory Primary Key (ProductID,StationID)
I figured out the timing portion, which is to create a job with some pre-set schedules; the job executes a stored procedure. The problem is how to write the stored procedure properly, which has to do the following things:
wait for the last product to pass all stations (so that the "StationRecord" table is fully updated)
find all NEW records generated in this cycle in the tables in DB1
combine the information of the 3 tables in DB1
insert the combined info into DB2.BOMHistory
Here's my code:
ALTER PROCEDURE BOMHistory_Proc
AS
BEGIN
SELECT
DB1.ProductInfo.ProductID,
DB1.ProductInfo.TimeCreated AS ProductCreated,
DB1.StationInfo.StationID,
DB1.StationInfo.BOM_used
into #temp_BOMList
FROM DB1.ProductInfo
JOIN DB1.StationRecord
ON DB1.ProductInfo.ProductID = DB1.StationRecord.ProductID
JOIN DB1.StationInfo
ON DB1.StationRecord.StationID = DB1.StationInfo.StationID
ORDER BY ProductInfo.ProductID
END
SELECT * from #temp_BOMList
INSERT INTO DB2.BOMHistory(ProductID, ProductCreated, StationID, BOM_used)
SELECT DISTINCT (ProductID, stationID)
FROM #temp_BOMList
WHERE (ProductID, stationID) NOT IN (SELECT ProductID, stationID FROM DB2.BOMHistory)
The condition in the INSERT statement is not working; please provide some advice.
Also, should I use a table variable or a temp table for this application?
Try:
INSERT INTO DB2.BOMHistory(ProductID, ProductCreated, StationID, BOM_used)
SELECT DISTINCT tb.ProductID, tb.ProductCreated, tb.StationID, tb.BOM_used
FROM #temp_BOMList tb
WHERE NOT EXISTS
(SELECT * FROM DB2.BOMHistory WHERE ProductID = tb.ProductID AND StationID = tb.StationID)
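For what it's worth, here is a sketch of how this could be folded directly into the scheduled procedure without the temp table (it assumes three-part names such as DB1.dbo.ProductInfo; adjust the schema names to your environment):
ALTER PROCEDURE BOMHistory_Proc
AS
BEGIN
    -- insert only the product/station combinations not yet present in DB2
    INSERT INTO DB2.dbo.BOMHistory (ProductID, ProductCreated, StationID, BOM_used)
    SELECT DISTINCT p.ProductID, p.TimeCreated, si.StationID, si.BOM_used
    FROM DB1.dbo.ProductInfo p
    JOIN DB1.dbo.StationRecord sr ON sr.ProductID = p.ProductID
    JOIN DB1.dbo.StationInfo si ON si.StationID = sr.StationID
    WHERE NOT EXISTS (
        SELECT 1
        FROM DB2.dbo.BOMHistory b
        WHERE b.ProductID = p.ProductID
          AND b.StationID = si.StationID
    );
END
With that approach the table variable versus temp table question largely goes away, since no intermediate storage is needed.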
I was trying to delete a record from my stock table when an update in the same table results in a quantity of 0, using two CTEs.
The upserts are working, but the delete is not producing the result I was expecting: the quantity in the stock table is changing to zero, but the record is not being deleted.
Table structure:
CREATE TABLE IF NOT EXISTS stock_location (
stock_location_id SERIAL
, site_code VARCHAR(10) NOT NULL
, location_code VARCHAR(50) NOT NULL
, status CHAR(1) NOT NULL DEFAULT 'A'
, CONSTRAINT pk_stock_location PRIMARY KEY (stock_location_id)
, CONSTRAINT ui_stock_location__keys UNIQUE (site_code, location_code)
);
CREATE TABLE IF NOT EXISTS stock (
stock_id SERIAL
, stock_location_id INT NOT NULL
, item_code VARCHAR(50) NOT NULL
, quantity FLOAT NOT NULL
, CONSTRAINT pk_stock PRIMARY KEY (stock_id)
, CONSTRAINT ui_stock__keys UNIQUE (stock_location_id, item_code)
, CONSTRAINT fk_stock__stock_location FOREIGN KEY (stock_location_id)
REFERENCES stock_location (stock_location_id)
ON DELETE CASCADE ON UPDATE CASCADE
);
This is what the statement looks like:
WITH stock_location_upsert AS (
INSERT INTO stock_location (
site_code
, location_code
, status
) VALUES (
inSiteCode
, inLocationCode
, inStatus
)
ON CONFLICT ON CONSTRAINT ui_stock_location__keys
DO UPDATE SET
status = inStatus
RETURNING stock_location_id
)
, stock_upsert AS (
INSERT INTO stock (
stock_location_id
, item_code
, quantity
)
SELECT
slo.stock_location_id
, inItemCode
, inQuantity
FROM stock_location_upsert slo
ON CONFLICT ON CONSTRAINT ui_stock__keys
DO UPDATE SET
quantity = stock.quantity + inQuantity
RETURNING stock_id, quantity
)
DELETE FROM stock stk
USING stock_upsert stk2
WHERE stk.stock_id = stk2.stock_id
AND stk.quantity = 0;
Does anyone know what's going on?
This is an example of what I'm trying to do:
DROP TABLE IF EXISTS test1;
CREATE TABLE IF NOT EXISTS test1 (
id serial
, code VARCHAR(10) NOT NULL
, description VARCHAR(100) NOT NULL
, quantity INT NOT NULL
, CONSTRAINT pk_test1 PRIMARY KEY (id)
, CONSTRAINT ui_test1 UNIQUE (code)
);
-- UPSERT
WITH test1_upsert AS (
INSERT INTO test1 (
code, description, quantity
) VALUES (
'01', 'DESC 01', 1
)
ON CONFLICT ON CONSTRAINT ui_test1
DO UPDATE SET
description = 'DESC 02'
, quantity = 0
RETURNING test1.id, test1.quantity
)
DELETE FROM test1
USING test1_upsert
WHERE test1.id = test1_upsert.id
AND test1_upsert.quantity = 0;
The second time the upsert runs, it should delete the record from test1, since the quantity is updated to zero.
Makes sense?
Here, DELETE is working exactly the way it was designed to work. The answer is actually pretty straightforward and documented; I ran into the same behaviour years ago.
The reason your DELETE is not actually removing the data is that, from the DELETE statement's point of view, the WHERE condition doesn't match what is stored in the table.
All sub-statements within a CTE (Common Table Expression) are executed with the same snapshot of the data, so they can't see each other's effects on the target table. In this case, when you run the UPDATE and then the DELETE, the DELETE statement sees the same data the UPDATE did, and does not see the rows that the UPDATE modified.
How can you work around that? You need to separate UPDATE & DELETE into two independent statements.
If you need to pass along information about what to delete, you could, for example, (1) create a temporary table and insert the primary keys of the rows that were updated, so that you can join to it in the later DELETE; (2) achieve the same result by adding a column to the updated table and changing its value to mark updated rows; or (3) use whatever other approach gets the job done.
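For illustration, a minimal sketch of option (1) using the test1 example from the question (the temporary table name is just an assumption for the example):
-- first statement: do the upsert and remember which rows ended up at zero
CREATE TEMPORARY TABLE zero_quantity_ids (id int);

WITH test1_upsert AS (
    INSERT INTO test1 (code, description, quantity)
    VALUES ('01', 'DESC 01', 1)
    ON CONFLICT ON CONSTRAINT ui_test1
    DO UPDATE SET description = 'DESC 02', quantity = 0
    RETURNING test1.id, test1.quantity
)
INSERT INTO zero_quantity_ids (id)
SELECT id FROM test1_upsert WHERE quantity = 0;

-- second, independent statement: the updated rows are visible here, so the delete works
DELETE FROM test1
WHERE id IN (SELECT id FROM zero_quantity_ids);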
Quoting the manual to support my findings:
7.8.2. Data-Modifying Statements in WITH
The sub-statements in WITH are executed concurrently with each other
and with the main query. Therefore, when using data-modifying
statements in WITH, the order in which the specified updates actually
happen is unpredictable. All the statements are executed with the same
snapshot (see Chapter 13), so they cannot “see” one another's effects
on the target tables.
(...)
This also applies to deleting a row that was already updated in the same statement: only the update is performed
Adding to the helpful explanation above... Whenever possible, it is best to break out modifying statements into their own statements.
However, when the CTE has multiple modifying statements that reference the same subquery, and temporary tables are not ideal (such as inside stored procedures), you just need a good solution.
In that case, if you'd like a simple trick for ensuring a bit of ordering, consider this example:
WITH
to_insert AS
(
SELECT
*
FROM new_values
)
, first AS
(
DELETE FROM some_table
WHERE
id in (SELECT id FROM to_insert)
RETURNING *
)
INSERT INTO some_other_table
SELECT * FROM new_values
WHERE
exists (SELECT count(*) FROM first)
;
The trick here is the exists (SELECT count(*) FROM first) part, which must be evaluated before the insert can happen. This is a way (which I wouldn't consider too hacky) to enforce an order while keeping everything within one CTE.
But this is just the concept - there are more optimal ways of doing the same thing for a given context.
I have a data processor that would create a table from a select query.
<_config:table definition="CREATE TABLE TEMP_TABLE (PRODUCT_ID NUMBER NOT NULL, STORE NUMBER NOT NULL, USD NUMBER(20, 5),
CAD NUMBER(20, 5), Description varchar(5), ITEM_ID VARCHAR(256), PRIMARY KEY (ITEM_ID))" name="TEMP_TABLE"/>
and the select query is
<_config:query sql="SELECT DISTINCT ce.PRODUCT_ID, ce.STORE, op.USD ,op.CAD, o.Description, ce.ITEM_ID
FROM PRICE op, PRODUCT ce, STORE ex, OFFER o, SALE t
where op.ITEM_ID = ce.ITEM_ID and ce.STORE = ex.STORE
and ce.PRODUCT_ID = o.PRODUCT_ID and o.SALE_ID IN (2345,1234,3456) and t.MEMBER = ce.MEMBER"/>
When I run that processor, I get a unique constraint error, even though I have DISTINCT in my select statement.
When I tried CREATE TABLE AS (SELECT .....) instead, it is created fine.
How is it possible to get that error? I'm doing a batch execute, so I'm not able to find the individual offending record.
The select distinct applies to the entire row, not to each column individually. So, two rows could have the same value of item_id but be different in the other columns.
The ultimate fix might be to have a group by item_id in the query, instead of select distinct. That would require other changes to the logic. Another possibility would be to use row_number() in a subquery and select the first row.
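For illustration, a sketch of the row_number() approach against the query from the question (the NUMBER types suggest Oracle; the ORDER BY inside the window is an arbitrary choice and just an assumption here):
SELECT PRODUCT_ID, STORE, USD, CAD, Description, ITEM_ID
FROM (
    SELECT ce.PRODUCT_ID, ce.STORE, op.USD, op.CAD, o.Description, ce.ITEM_ID,
           -- keep one arbitrary row per ITEM_ID
           ROW_NUMBER() OVER (PARTITION BY ce.ITEM_ID ORDER BY ce.PRODUCT_ID) AS rn
    FROM PRICE op, PRODUCT ce, STORE ex, OFFER o, SALE t
    WHERE op.ITEM_ID = ce.ITEM_ID
      AND ce.STORE = ex.STORE
      AND ce.PRODUCT_ID = o.PRODUCT_ID
      AND o.SALE_ID IN (2345, 1234, 3456)
      AND t.MEMBER = ce.MEMBER
)
WHERE rn = 1;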