Combining 2 tables with the "same" primary key in a function - SQL

I need to write a function for my database to get the price for the order table from my article tables. I have two tables for articles: one for rent items and one for sale items.
Both tables use the primary key ItemNo; Sales starts at ItemNo 1 and Rent starts at ItemNo >= 1000.
I wrote the function below for the Sales table, and it works. I am just not sure how to combine the two functions to get the price from both tables.
Should I use an if-case and write the function so that if inItemNo >= 1000 it uses the Rent function and otherwise the Sales function, or should I use a join? If a join is the right choice, I am not quite sure how to use it correctly. Can someone help me?
Thanks in advance.
DELIMITER $$
create or replace function fn_PurchasingPrice(inItemNo int) returns int
begin
    declare OutPurchasingPrice int;
    -- the price column is assumed to be called PurchasingPrice here
    set OutPurchasingPrice = (select PurchasingPrice
                              from Sales
                              where ItemNo = inItemNo);
    return ifnull(OutPurchasingPrice, -1);
END$$
DELIMITER ;

I think we have a database design issue. Here are some options.
What I understand is that you want a function that simply returns the price of something, either a rent or a sale item.
These items have different ItemNo's (PKs) stored in their own tables. This implies that they are different entities; the fact that they share a price field doesn't mean they are related. So you might want to either:
Treat them separately and have two functions that return the price for each: fn_SalePurchasingPrice and fn_RentPurchasingPrice.
Create a relationship.
If you really don't want a relationship nor two separate functions, which I wouldn't recommend, you can pass two PK values into the function, one for each table. You would use it like this: fnPurchasingPrice(6011, 3048), and it would return the two values using a union:
select price from table1 where table1_id = value1
union select price from table2 where table2_id = value2
The number of items in each table can exceed 1000, so the fact that one table's PK starts at 1000 is not a reliable way to tell the tables apart.
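If you go with the two-function option, a minimal sketch of the Rent-side function could look like the one below; it assumes the Rent table's price column is also called PurchasingPrice, so adjust the names to your actual schema. fn_SalePurchasingPrice would be the same thing against the Sales table, essentially what you already have.
DELIMITER $$
create or replace function fn_RentPurchasingPrice(inItemNo int) returns int
begin
    declare OutPurchasingPrice int;
    -- assumed column name: PurchasingPrice
    set OutPurchasingPrice = (select PurchasingPrice
                              from Rent
                              where ItemNo = inItemNo);
    return ifnull(OutPurchasingPrice, -1);
end$$
DELIMITER ;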

Related

Problem to calculate values from 1 table using group by in firebird

I have a logic problem calculating the final value of this table:
https://i.stack.imgur.com/YPXXX.png
I need to calculate, for every row, +1 when the column TIPO has the value "E" and -1 when it has "S", grouping by the columns Codigo and Configuracao.
Basically, I need a simple stock control. The columns Codigo and Configuracao identify the product, and TIPO is the type of movement, S = OUT and E = IN.
Can anyone shed some light on this?
Untested, but maybe something like this:
select SUM(t1.TipoNumeric), t1.CODIGO, t1.CONFIGURACAO
from (
    select
        case TIPO
            when 'E' then 1
            when 'S' then -1
            else 0
        end as TipoNumeric,
        CODIGO,
        CONFIGURACAO
    from MyTable
) as t1
group by t1.CODIGO, t1.CONFIGURACAO
Just add that +1/-1 column, perhaps?
alter table MyTable
add tipo_val computed by
(
decode( upper(TIPO), 'E', +1, 'S', -1 )
)
https://firebirdsql.org/file/documentation/html/en/refdocs/fblangref25/firebird-25-language-reference.html#fblangref25-ddl-tbl
https://www.firebirdsql.org/refdocs/langrefupd21-intfunc-decode.html
And then:
Select * from MyTable;
Select SUM(tipo_val), CODIGO, CONFIGURACAO
From MyTable
Group by 2, 3
P.S. Do not use pictures to show your data.
Instead, put the data into http://dbfiddle.uk/?rdbms=firebird_3.0 as a script,
and then use the Markdown Export there to copy both the data and a hyperlink into your question text.
P.P.S. I believe your whole approach is wrong here, if you "need a simple stock control".
https://en.wikipedia.org/wiki/Double-entry_bookkeeping
https://medium.com/@RobertKhou/double-entry-accounting-in-a-relational-database-2b7838a5d7f8
I think your table should have columns like these (a sketch of such a table follows below):
a surrogate row id: the primary key, an auto-incrementing integer, 32 or 64 bits
columns identifying your item: usually this is, again, a single surrogate integer SKU (Stock Keeping Unit) referencing (see: foreign keys) another "dictionary table". In your case it seems to be the two columns Codigo and Configuracao, but that also means you cannot add extra information ("attributes") about your items, like a price or a photo (read: database normalization). It also makes grouping harder for the Firebird engine than using a single integer column. Also, did you create an index on the item-identifying column(s)? What is the query plan on those selects: do they use an index on Codigo and Configuracao, or an ad hoc external sort instead?
the timestamp of the operation, automatically set by the Firebird server to current_timestamp, so you always know when exactly that row was inserted. Indexed, of course.
the computer user who added that row, again automatically set by the Firebird server to current_user, or to an ID of a user in some stock_workers table you would create. Surely indexed too.
some description of the operation, like a contract number or a seller name, anything that would later help you remember what real-world event that row describes. Being free-form text, it probably would not be indexed. But maybe you would eventually make some contracts or sellers table and add integer references (FK IDs) to those tables? That depends on which kind of data is repeated often enough to be worth extracting into extra indexed columns.
maybe a unit of measure; maybe all your units will forever be measured in pieces, in integer quantities, but maybe there will be some items measured in kilograms, meters, liters, etc.
finally, two integer (or float?) columns like Qty_Income and Qty_Outcome, where you would record how many items were added to or taken from your depot. There would be no E/S column! There would be two integer columns, and you would put a number into one or the other. Why? Read the articles about bookkeeping above!
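A minimal sketch of such tables in Firebird 3 DDL, with made-up names and types chosen only for illustration (identity columns need Firebird 3.0 or later):
create table known_goods (
    id           integer generated by default as identity primary key,
    Codigo       varchar(50) not null,
    Configuracao varchar(50) not null
);

create table stock_movements (
    id          bigint generated by default as identity primary key,   -- surrogate row id
    sku_id      integer not null references known_goods (id),          -- item reference
    moved_at    timestamp default current_timestamp not null,          -- when the row was inserted
    moved_by    varchar(63) default current_user not null,             -- who inserted it
    note        varchar(200),                                          -- free-form description
    qty_income  integer default 0 not null,                            -- items added to the depot
    qty_outcome integer default 0 not null                             -- items taken from the depot
);

create index ix_stock_movements_sku  on stock_movements (sku_id);
create index ix_stock_movements_date on stock_movements (moved_at);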
In such a database schema your query would finally look like this:
select Sum(s.Qty_Income) as Credit, Sum(s.Qty_Outcome) as Debit,
Sum(s.Qty_Income) - Sum(s.Qty_Outcome) as Saldo,
min(g.Codigo), min(g.Configuracao)
from stock_movements s
join known_goods g on g.ID = s.SKU_ID
group by s.SKU_ID
And you would also be able to flexibly compose similar queries grouping by workers, or dates, or quantities (for example, only caring about big events where 1000 or more items were added in one operation), or anything else.

SQL Key Value Pair Query

I have two tables:
Product Table
ID (PK), Description, CategoryID, SegmentID, TypeID, SubTypeID, etc.
Attribute Table
ID (PK), ProductID (FK), Key, Value
And I would like to query these two tables in a join that returns one row for each product, with all of the key/value pair records from the Attribute table returned in a single column, perhaps separated by a pipe character (Key1: Value1 | Key2: Value2 | Key3: Value3 | etc.). Each product could have a different number of key/value pairs, with some products having as few as 2-3 and some as many as 30. I would like to figure out how to get the query results to look something like this (perhaps selected into a new table):
product.ID, product.Description, [special attributes column], product.CategoryID, product.SegmentID, etc.
example result:
65839, "WonderWidget", "HeightInInches: 26 | WeightInLbs: 5 | Color: Black", "Widgets", "Commercial"
Conversely, it would be helpful to figure out how to take the query results, formatted as mentioned above, and push them back into the original Attribute table. For example, if we output the query above into a table where the [special attributes column] was modified (values updated/corrected by a human), it would be nice to know how to use the table containing the [special attributes column] to update the original Attribute table. I think for that to be possible, the Attribute.ID field would need to be included in the query output.
In the end, what I am trying to accomplish is a way to export the Product and Attribute data out to one row per product with all the attribute data, so that it can be reviewed/updated/corrected by a human in something as simple as an Excel file, and then pushed back into SQL. I think I can figure out how to do all of that once I get over the hurdle of getting the products and attributes out as one row per product. Perhaps the correct answer is to pivot all of the attributes into columns, but I'm afraid the query would be incredibly wide and wasteful. Open to suggestions for this as well. Changing to a document-type database is not an option right now; I need to figure out the best way to handle this in relational SQL.
You first need to build the key/value pairs. This can be achieved with a concatenation operator or function: || in standard SQL, + or CONCAT() in SQL Server. You also need to think about NULLs, since NULL concatenated with NULL is still NULL in most databases.
SELECT ProductID, [Key] + ':' + [Value] AS KeyValue FROM AttributeTable
Then you need to group those using an aggregate function like STRING_AGG (assuming SQL Server 2017 or later). Other databases have different aggregate functions; MySQL, for example, uses GROUP_CONCAT.
https://learn.microsoft.com/en-us/sql/t-sql/functions/string-agg-transact-sql?view=sql-server-2017
https://www.geeksforgeeks.org/mysql-group_concat-function/
SELECT ProductID, STRING_AGG([Key] + ':' + [Value], '|') AS KeyValue FROM AttributeTable GROUP BY ProductID
I can expand on the answer if you can provide more information.
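As a rough, untested sketch of how that aggregate could be joined back to the product row to get the one-row-per-product output you describe (table and column names taken from your question):
SELECT p.ID,
       p.Description,
       a.Attributes,        -- the "special attributes column"
       p.CategoryID,
       p.SegmentID
FROM Product AS p
LEFT JOIN (
    SELECT ProductID,
           STRING_AGG([Key] + ': ' + [Value], ' | ') AS Attributes
    FROM Attribute
    GROUP BY ProductID
) AS a ON a.ProductID = p.ID;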

User inventory stacking SQL Server 2014

I want to stack items in users' inventories.
If you get a new weapon with the same ItemID, it currently just creates a new row, as in the picture below.
If the ItemID already exists, it should instead update the quantity to the number of weapons you actually have, and delete the duplicate rows.
What type of query does that? Or can you help me on the way?
There are various problems in your table design. First, I do not understand the purpose of InventoryID here. Because InventoryID is unique (I would assume it is the primary key), every time you add an ItemID (whether it already exists or not) it is treated as a new row.
I do not know what you're trying to achieve in the end, but:
Option 1: Create a separate table Inventory with InventoryID, and probably ItemID and LeasedUntil, then modify your current table so it holds only CustomerID, ItemID, and a quantity.
Option 2: Keep your current table and add another table called Item that has only ItemID and Quantity.
You may also want to review table normalization here: http://support.microsoft.com/kb/283878
Your current table is in 1NF.
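If what you want for now is just to collapse the duplicate rows in the current design, an untested sketch along these lines might help; the table and column names (Inventory, InventoryID, CustomerID, ItemID, Quantity) are assumptions based on your description:
-- fold the total quantity into the row with the lowest InventoryID per (CustomerID, ItemID)
UPDATE inv
SET inv.Quantity = agg.TotalQty
FROM Inventory AS inv
JOIN (SELECT CustomerID, ItemID,
             SUM(Quantity)    AS TotalQty,
             MIN(InventoryID) AS KeepID
      FROM Inventory
      GROUP BY CustomerID, ItemID) AS agg
  ON agg.KeepID = inv.InventoryID;

-- then delete the duplicate rows that were folded in
DELETE inv
FROM Inventory AS inv
JOIN (SELECT CustomerID, ItemID, MIN(InventoryID) AS KeepID
      FROM Inventory
      GROUP BY CustomerID, ItemID) AS agg
  ON  inv.CustomerID  = agg.CustomerID
  AND inv.ItemID      = agg.ItemID
  AND inv.InventoryID <> agg.KeepID;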

Basic SQL Insert statement approach

Given that I have two tables
Customer (id int, username varchar)
Order (customer_id int, order_date datetime)
Now I want to insert into the Order table based on customer information that is available in the Customer table.
There are a couple of ways I can approach this problem.
First, I can query the customer information into a variable and then use it in an INSERT statement.
DECLARE @Customer_ID int
SELECT @Customer_ID = id FROM Customer WHERE username = 'john.smith'
INSERT INTO Orders (customer_id, order_date) VALUES (@Customer_ID, GETDATE())
The second approach is to use a combined INSERT and SELECT query.
INSERT INTO Orders (customer_id, order_date)
SELECT id, GETDATE() FROM Customer
WHERE username = 'john.smith'
So my question is: which is the better way to proceed in terms of speed and overhead, and why? I know that if a lot of information is being queried from the Customer table, the second approach is much better.
P.S. I was asked this question in one of my technical interviews.
The second approach is better.
The first approach will fail if the customer is not found: no check is done to make sure a customer id was actually returned, so the variable stays NULL and the insert writes a NULL customer_id (or violates a constraint).
The second approach will simply do nothing if the customer is not found.
From an overhead perspective, why create variables if they are not needed? Set-based SQL is usually the better approach.
In a typical real-world order-entry system, the user has already looked the Customer up via a Search interface, or has chosen the customer from a list of customers displayed alphabetically; so your client program, when it goes to insert an order for that customer, already knows the CustomerID.
Furthermore, the order date is typically defaulted to getdate() as part of the ORDERS table definition, and your query can usually ignore that column.
But to handle multiple line items on an order, your insert into ORDER_HEADER needs to return the order header id so that it can be inserted into the ORDER_DETAIL line item (child) rows.
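In SQL Server that id is typically captured with SCOPE_IDENTITY() or an OUTPUT clause; a minimal sketch, with made-up ORDER_HEADER/ORDER_DETAIL columns and an assumed identity key, might look like this:
DECLARE @OrderHeaderID int;

-- header row; order_date is assumed to default to getdate() in the table definition
INSERT INTO ORDER_HEADER (customer_id)
VALUES (42);                                 -- customer id already known to the client

SET @OrderHeaderID = SCOPE_IDENTITY();       -- identity value generated for ORDER_HEADER

-- child rows reuse the captured header id
INSERT INTO ORDER_DETAIL (order_header_id, product_id, quantity)
VALUES (@OrderHeaderID, 1001, 2);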
I don't recommend either approach. Why do you have the customer name and not the id in the first place? Don't you have a user interface that maintains a reference to the current customer by holding the ID in its state? Doing the lookup by name exposes you to potentially selecting the wrong customer.
If you must do this for reasons unknown to me, the 2nd approach is certainly more efficient because it only contains one statement.
Make the customer id in the Order table a foreign key that references the Customer table.
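For example, a minimal sketch of that constraint, using the Orders table name from the snippets above (the constraint name is made up):
ALTER TABLE Orders
ADD CONSTRAINT FK_Orders_Customer
    FOREIGN KEY (customer_id) REFERENCES Customer (id);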

T-SQL - performing a join on a delimited column - performance and optimisation issue

I have the following (slightly simplified in the columns returned) query.
select Products.Product, Products.ID, Products.Customers
from Products
where Products.orderCompleteDate is null
This would return, as an example
productA 1 Bob
productA 1 Jane
productB 2 John,Dave
Note that Customers can be a comma-delimited list. What I want to add is a column 'Customer Locations', so the above becomes
productA 1 Bob Ireland
productA 1 Jane Wales
productB 2 John,Dave Scotland,England
I created a function below, where fn_split returns a single row per delimited item.
create FUNCTION [dbo].[GetLocations] (@CustomerNames varchar(256))
RETURNS @TempLocations table (CustomerLocations varchar(256)) AS
begin
    declare @NameStr varchar(256)
    declare @temp table (singleLoc varchar(256))

    insert into @temp
    select CustomerLocation.Location from CustomerLocation
    INNER JOIN Customers ON Customers.ID = CustomerLocation.ID
    INNER JOIN dbo.fn_Split(@CustomerNames, ',') split ON split.Item = Customers.Name

    SELECT @NameStr = COALESCE(@NameStr + ',', '') + singleLoc
    FROM @temp

    insert into @TempLocations values (@NameStr)
    return
end
And applied it to the original query as follows
select Products.Product, Products.ID, Products.Customers, Locations.CustomerLocations
from Products
OUTER APPLY dbo.GetLocations(Products.Customers) AS Locations
where Products.orderCompleteDate is null
However, this is extremely slow, with the query taking ~10 seconds on a table with a mere 2000 rows (the initial query runs almost instantly). This suggests that the query could not be optimised and is being evaluated row by row. I stayed away from scalar-valued functions for this reason, and tried to stick to table-valued functions. Is there any glaring fault in my logic/code?
I'd normally suggest creating a view, based on the unnormalized table, that does the normalization, and then using that as the basis for any future queries. Unfortunately, I can't identify a PK for your current Products table, but you'd hopefully create this view using schemabinding, and hopefully be able to turn it into an indexed view (indexing on PK + customer name).
Querying this view (using Enterprise Edition, or the NOEXPAND option) should then give you comparable performance as if the normalized table existed.
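As a rough sketch of the normalizing-view idea, reusing the fn_Split function from your question (whether it can actually be schema-bound and indexed depends on details not shown here):
CREATE VIEW dbo.ProductCustomers
AS
SELECT p.ID,
       p.Product,
       split.Item AS CustomerName          -- one row per name in the delimited list
FROM dbo.Products AS p
CROSS APPLY dbo.fn_Split(p.Customers, ',') AS split;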
One option would be to create a second table that normalises the Products table, and keep it in sync with triggers that call the split function when rows are inserted.
Pros: you get standard performance and easy SQL queries.
Cons: there is the potential for the tables to go out of sync should anything go wrong (you can always schedule a job to rebuild the new table from scratch periodically).
Obviously the best answer would be to redesign the Products table, but I assume that's not possible, which is why you're messing with split functions etc.
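An untested sketch of that idea, covering inserts only (updates and deletes would need similar triggers; all names except fn_Split are made up):
CREATE TABLE dbo.ProductCustomer (
    ProductID    int          NOT NULL,
    CustomerName varchar(256) NOT NULL
);
GO
CREATE TRIGGER dbo.trg_Products_Insert
ON dbo.Products
AFTER INSERT
AS
BEGIN
    -- split the delimited Customers column into one row per customer
    INSERT INTO dbo.ProductCustomer (ProductID, CustomerName)
    SELECT i.ID, split.Item
    FROM inserted AS i
    CROSS APPLY dbo.fn_Split(i.Customers, ',') AS split;
END;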