So I have the following problem:
I have two tables: one containing different bids for a product type, and one containing the price, date, etc. at which the product was sold.
The tables look like this:
Table bids:
+--------+---------------------+---------------------+--------------+-------+
| Bid_id | Start_time          | End_time            | Product_type | Price |
+--------+---------------------+---------------------+--------------+-------+
| 1      | 18.01.2020 06:00:00 | 18.01.2020 06:02:33 | blue         | 5 €   |
| 2      | 18.01.2020 06:00:07 | 18.01.2020 06:00:43 | blue         | 7 €   |
| 3      | 18.01.2020 06:01:10 | 19.01.2020 15:03:15 | red          | 3 €   |
| 4      | 18.01.2020 06:02:20 | 18.01.2020 06:05:44 | blue         | 6 €   |
+--------+---------------------+---------------------+--------------+-------+
Table sells:
+---------+---------------------+--------------+--------+
| Sell_id | Sell_time           | Product_type | Price  |
+---------+---------------------+--------------+--------+
| 1       | 18.01.2020 06:00:31 | Blue         | 6,50 € |
| 2       | 18:01.2020 06:51:03 | Red          | 2,50 € |
+---------+---------------------+--------------+--------+
The sell_id and the bid_id have no relation to each other.
What I want to find out is the maximum bid at the time we sold the product_type. So if we take sell_id 1, the query should check which bids for this specific product_type were active at the sell_time (in this case bid_id 1 and 2) and return the higher price (in this case that of bid_id 2).
I tried to solve this problem in Power BI, but I was not able to find a solution. I assume that I have to work with SQL joins to solve it.
Is it possible to join based on criteria instead of matching columns? Something like:
SELECT bids.start_time, bids.end_time, bids.product_type, MAX(bids.price),
       sells.sell_time, sells.product_type, sells.price
FROM sells
INNER JOIN bids
    ON bids.start_time < sells.sell_time
   AND bids.end_time > sells.sell_time;
I am sorry if this question is confusing; I am still new to this. Thanks in advance for any help!
Your sample data Sell_time should be 18.01.2020, right? You can try the code below (it can be resource-intensive for large amounts of data due to the Cartesian join). If you are sure that the sell day always falls within the bid's start day, then you can add a date column to your tables and use an additional TREATAS ( VALUES ( bids[day] ), sells[day] ) filter.
Test =
VAR __treatAsFilter =
    TREATAS ( VALUES ( bids[Product_type] ), sells[Product_type] )
RETURN
    SUMMARIZE (
        FILTER (
            SUMMARIZECOLUMNS (
                sells[Sell_id],
                bids[Price],
                bids[Start_time],
                sells[Sell_time],
                bids[End_time],
                sells[Product_type],
                __treatAsFilter
            ),
            [Start_time] <= [Sell_time]
                && [End_time] >= [Sell_time]
        ),
        sells[Sell_id],
        "MaxPrice", MAX ( bids[Price] )
    )
I have a Production table and a Standing Data table. The relationship of Production to Standing Data is actually many-to-many, which is different from how this relationship is usually represented (many-to-one).
The Standing Data table holds a list of tasks and the score each task is worth. Tasks can appear multiple times with different "ValidFrom" dates, so that the score can change at different points in time. What I am trying to do is query the Production table so that each TaskID is looked up in the Standing Data table, using the date it was logged, to determine which score it should return.
Here's an example of how I want the data to look:
Production Table:
+----------+------------+-------+-----------+--------+-------+
| RecordID | Date       | EmpID | Reference | TaskID | Score |
+----------+------------+-------+-----------+--------+-------+
| 1        | 27/02/2020 | 1     | 123       | 1      | 1.5   |
| 2        | 27/02/2020 | 1     | 123       | 1      | 1.5   |
| 3        | 28/02/2020 | 1     | 123       | 1      | 2     |
| 4        | 29/02/2020 | 1     | 123       | 1      | 2     |
+----------+------------+-------+-----------+--------+-------+
Standing Data
+----------+--------+----------------+-------+
| RecordID | TaskID | DateActiveFrom | Score |
+----------+--------+----------------+-------+
| 1        | 1      | 01/02/2020     | 1.5   |
| 2        | 1      | 28/02/2020     | 2     |
+----------+--------+----------------+-------+
I have tried the code below, but unfortunately, because multiple records meet the criteria, the production data duplicates, giving two different scores per record:
SELECT p.[RecordID],
       p.[Date],
       p.[EmpID],
       p.[Reference],
       p.[TaskID],
       s.[Score]
FROM ProductionTable AS p
LEFT JOIN StandingDataTable AS s
    ON s.[TaskID] = p.[TaskID]
   AND s.[DateActiveFrom] <= p.[Date];
What is the correct way to return the correct and singular/scalar Score value for this record based on the date?
You can use APPLY:
SELECT p.[RecordID], p.[Date], p.[EmpID], p.[Reference], p.[TaskID], s.[Score]
FROM ProductionTable AS p
OUTER APPLY (
    SELECT TOP (1) s.[Score]
    FROM StandingDataTable AS s
    WHERE s.[TaskID] = p.[TaskID]
      AND s.[DateActiveFrom] <= p.[Date]
    ORDER BY s.[DateActiveFrom] DESC
) s;
If you want the score on a different basis (e.g. per record), change the WHERE clause inside the APPLY accordingly.
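If APPLY is not available, a window-function sketch achieves the same result (assuming the same table and column names): rank the matching standing-data rows per production record, newest first, and keep only the first.
SELECT [RecordID], [Date], [EmpID], [Reference], [TaskID], [Score]
FROM (
    SELECT p.[RecordID], p.[Date], p.[EmpID], p.[Reference], p.[TaskID],
           s.[Score],
           -- newest applicable standing-data row per production record gets rn = 1
           ROW_NUMBER() OVER (PARTITION BY p.[RecordID]
                              ORDER BY s.[DateActiveFrom] DESC) AS rn
    FROM ProductionTable AS p
    LEFT JOIN StandingDataTable AS s
        ON s.[TaskID] = p.[TaskID]
       AND s.[DateActiveFrom] <= p.[Date]
) ranked
WHERE rn = 1;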
I'm working on my senior high school project and am reaching out to the community for help, as my teacher doesn't know the answer to my question!
I have a simple "Products" table as shown below:
I also have an "Orders" table, shown below:
Is there a way I can create a field in the "Orders" table named "Total Cost" that automatically calculates the total cost of all the products selected?
Firstly, I would advise against storing calculated values, and would also strongly advise against using calculated fields in tables. In general, calculations should be performed by queries.
I would also strongly advise against the use of multivalued fields, as your images appear to show.
In general, when following the rules of database normalisation, most sales databases are structured in a very similar manner, containing the following main tables (amongst others):
Products (aka Stock Items)
Customers
Order Header
Order Line (aka Order Detail)
A good example for you to learn from would be the classic Northwind sample database provided free of charge as a template for MS Access.
With the above structure, observe that each table serves a purpose with each record storing information pertaining to a single entity (whether it be a single product, single customer, single order, or single order line).
For example, you might have something like:
Products
Primary Key: Prd_ID
+--------+-----------+-----------+
| Prd_ID | Prd_Desc  | Prd_Price |
+--------+-----------+-----------+
| 1      | Americano | $8.00     |
| 2      | Mocha     | $6.00     |
| 3      | Latte     | $5.00     |
+--------+-----------+-----------+
Customers
Primary Key: Cus_ID
+--------+--------------+
| Cus_ID | Cus_Name     |
+--------+--------------+
| 1      | Joe Bloggs   |
| 2      | Robert Smith |
| 3      | Lee Mac      |
+--------+--------------+
Order Header
Primary Key: Ord_ID
Foreign Keys: Ord_Cust
+--------+----------+------------+
| Ord_ID | Ord_Cust | Ord_Date   |
+--------+----------+------------+
| 1      | 1        | 2020-02-16 |
| 2      | 1        | 2020-01-15 |
| 3      | 2        | 2020-02-15 |
+--------+----------+------------+
Order Line
Primary Key: Orl_Order + Orl_Line
Foreign Keys: Orl_Order, Orl_Prod
+-----------+----------+----------+---------+
| Orl_Order | Orl_Line | Orl_Prod | Orl_Qty |
+-----------+----------+----------+---------+
| 1         | 1        | 1        | 2       |
| 1         | 2        | 3        | 1       |
| 2         | 1        | 2        | 1       |
| 3         | 1        | 1        | 4       |
| 3         | 2        | 3        | 2       |
+-----------+----------+----------+---------+
You might also opt to store the product description & price on the order line records, so that these are retained at the point of sale, as the information in the Products table is likely to change over time.
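With that structure in place, the "Total Cost" itself is best produced by a query rather than stored. A sketch (the table names OrderHeader, OrderLine, Customers and Products are assumptions based on the layout above; note that Access requires the nested parentheses between joins):
SELECT oh.Ord_ID,
       oh.Ord_Date,
       c.Cus_Name,
       SUM(ol.Orl_Qty * p.Prd_Price) AS Total_Cost
FROM ((OrderHeader AS oh
INNER JOIN Customers AS c ON c.Cus_ID = oh.Ord_Cust)
INNER JOIN OrderLine AS ol ON ol.Orl_Order = oh.Ord_ID)
INNER JOIN Products AS p ON p.Prd_ID = ol.Orl_Prod
GROUP BY oh.Ord_ID, oh.Ord_Date, c.Cus_Name;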
I have a table with the following stock data, with columns such as date, ticker, open and close (stock prices).
To query this data, I want to know which stock gave the highest margin on a particular date. So if I have 516 different stocks, my query should return 516 rows of ticker, date, open, close and a new column, Margin (which will be max(close - open)).
+-------------------+--------------------+------------------+-------------------+
| deep_stocks.date_ | deep_stocks.ticker | deep_stocks.open | deep_stocks.close |
+-------------------+--------------------+------------------+-------------------+
| 20100721          | A                  | 27.68            | 27.58             |
| 20100722          | A                  | 27.95            | 28.72             |
| 20100723          | A                  | 28.56            | 29.3              |
| 20100726          | A                  | 29.22            | 29.64             |
| 20100727          | A                  | 29.73            | 28.87             |
| 20100728          | A                  | 28.79            | 28.78             |
| 20100729          | A                  | 28.97            | 28.15             |
| 20100730          | A                  | 27.78            | 27.93             |
| 20100802          | A                  | 28.35            | 28.82             |
| 20100803          | A                  | 28.7             | 27.84             |
+-------------------+--------------------+------------------+-------------------+
I have written a query where my approach was:
Step 1 - Get the difference between Close and Open prices (Inner/Sub query)
Step 2 - Get the maximum of margin for every stock (used group by with max function)
Step 3 - Join the results with Main Table and get the data.
I'll put my query below; can someone please correct it, as it is taking a long time? I would also like to know whether there is an alternative approach.
Following the approach described above, please find my query below:
SELECT ds.ticker, ds.date_, ds.close, ds.open, ds.Margin
FROM (
    SELECT ticker, date_, close, open,
           CASE (close - open) > 0 WHEN true THEN round(close - open, 2) ELSE 0 END AS Margin
    FROM DataStocks
) ds
JOIN (
    SELECT dsIn.ticker, max(dsIn.Margin) AS mxMargin
    FROM (
        SELECT ticker,
               CASE (close - open) > 0 WHEN true THEN round(close - open, 2) ELSE 0 END AS Margin
        FROM DataStocks
    ) dsIn
    GROUP BY dsIn.ticker
) dsEx
ON ds.ticker = dsEx.ticker AND ds.Margin = dsEx.mxMargin
ORDER BY ds.Margin;
Are there any other alternatives for this query, or is it possible to optimize it?
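One alternative worth trying is to compute the margin once and rank it with a window function, so the table is scanned a single time instead of being joined to itself. A sketch, assuming a dialect with window-function support (e.g. Hive 0.13+) and keeping the original query's floor of 0 for negative margins:
SELECT ticker, date_, open, close, Margin
FROM (
    SELECT ticker, date_, open, close,
           CASE (close - open) > 0 WHEN true THEN round(close - open, 2) ELSE 0 END AS Margin,
           -- best margin per ticker gets rn = 1
           row_number() OVER (PARTITION BY ticker ORDER BY (close - open) DESC) AS rn
    FROM DataStocks
) ranked
WHERE rn = 1
ORDER BY Margin;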
I need to aggregate a sum over contiguous dates. I've seen solutions to similar problems that return the start and end dates, but they don't aggregate the data between those ranges. It's further complicated by the extremely large amount of data involved, to the point that a simple self-join takes an impractical amount of time (especially since the start and end date fields are unindexed).
I have a solution involving cursors, but I've generally been led to believe that cursors can always be replaced more efficiently with joins. So far, though, every join-based solution I've tried that comes anywhere close to giving me the data I need takes at least an hour, while my cursor solution takes about 10 seconds. So I'm asking if there is a more efficient answer.
The data includes both buy and sell transactions, and each row of aggregated contiguous dates returned also needs to list the transaction ID of the last sell that occurred before the first buy of the contiguous set of buy transactions.
An example of the data:
+------------------+----------+-----------+---------+---------+
| TRANSACTION_TYPE | TRANS_ID | StartDate | EndDate | Amount  |
+------------------+----------+-----------+---------+---------+
| sell             | 100      | 2/16/16   | 2/18/16 | $100.00 |
| sell             | 101      | 3/1/16    | 6/6/16  | $121.00 |
| buy              | 102      | 6/10/16   | 6/12/16 | $22.00  |
| buy              | 103      | 6/12/16   | 6/14/16 | $0.35   |
| buy              | 104      | 6/29/16   | 7/2/16  | $5.00   |
| sell             | 105      | 7/3/16    | 7/6/16  | $115.00 |
| buy              | 106      | 7/8/16    | 7/9/16  | $200.00 |
| sell             | 107      | 7/10/16   | 7/13/16 | $4.35   |
| sell             | 108      | 7/17/16   | 7/20/16 | $0.50   |
| buy              | 109      | 7/25/16   | 7/29/16 | $33.00  |
| buy              | 110      | 7/29/16   | 8/1/16  | $75.00  |
| buy              | 111      | 8/1/16    | 8/3/16  | $0.33   |
| sell             | 112      | 9/1/16    | 9/2/16  | $99.00  |
+------------------+----------+-----------+---------+---------+
Should have results like the following:
+-----------+------------+------------+---------+
| Last_Sell | StartDate  | EndDate    | Amount  |
+-----------+------------+------------+---------+
| 101       | 6/10/16    | 6/14/16    | $22.35  |
| 101       | 6/29/16    | 7/2/16     | $5.00   |
| 105       | 7/8/16     | 7/9/16     | $200.00 |
| 108       | 7/25/16    | 8/3/16     | $108.33 |
+-----------+------------+------------+---------+
Right now I use queries to split the data into buys and sells, and I just walk through the buy data, aggregating as I go, inserting into the return table every time I find a break in the dates, and stepping through the sell table until I reach the last sell before the start date of the set of buys.
Walking linearly through cursors gives me O(n) computational time. Even though cursors are orders of magnitude less efficient, the calculation is still linear, while I suspect the joins I would need would cost at least O(n log n). With the ridiculous amount of data I'm working with, the constant-factor inefficiency of cursors is swamped if the joins go beyond linear time.
If I assume that the transaction id increases along with the dates, then you can get the id of the last sell using a cumulative max. Then, adjacency can be found using similar logic, but with a lag first:
with cte as (
    select t.*,
           -- id of the most recent sell at or before this row
           max(case when transaction_type = 'sell' then trans_id end) over
               (order by trans_id) as last_sell,
           -- previous end date within the same transaction type
           lag(enddate) over (partition by transaction_type order by trans_id) as prev_enddate
    from t
)
select last_sell, min(startdate) as startdate, max(enddate) as enddate,
       sum(amount) as amount
from (select cte.*,
             -- start a new group whenever this buy is not contiguous with the
             -- previous one (shared or adjacent end dates count as contiguous)
             sum(case when startdate <= dateadd(day, 1, prev_enddate)
                      then 0 else 1
                 end) over (partition by last_sell order by trans_id) as grp
      from cte
      where transaction_type = 'buy'
     ) x
group by last_sell, grp;
Disclaimers first: I'm dealing with a legacy database with a pretty bizarre schema. Plus, I'm a complete SQL noob, so that's not helping either.
Basically, I have a table that has product variations. A good example might be t-shirts. The "generic" t-shirt has a product id. Each type of variation (size, color) has an id. Each value of the variation (red, small) has an id.
So table looks like:
+----+----------+-----------+-------------+----------+------------+
| id | tshirt   | option_id | option_name | value_id | value_name |
+----+----------+-----------+-------------+----------+------------+
| 1  | Zombies! | 2         | color       | 13       | red        |
| 1  | Zombies! | 2         | color       | 24       | black      |
| 1  | Zombies! | 3         | size        | 35       | small      |
| 1  | Zombies! | 3         | size        | 36       | medium     |
| 1  | Zombies! | 3         | size        | 56       | large      |
| 2  | Ninja!   | 2         | color       | 24       | black      |
| 2  | Ninja!   | 3         | size        | 35       | small      |
+----+----------+-----------+-------------+----------+------------+
I want to write a query that retrieves the different combinations for a given product.
In this example, the Zombie shirt comes in Red/Small, Red/Medium, Red/Large, Black/Small, Black/Medium, and Black/Large (six variations). The Ninja shirt just has the one variation: Black/Small.
I believe this is the cartesian product of size and color.
Those ids are really foreign keys to other tables, so the names/values aren't actually stored in this table, but I wanted to include them for clarity.
The number of options can vary (it's not limited to two), and the number of values per option can vary as well. Ultimately, the numbers are likely to be small-ish for a given product, so I'm not worried about millions of rows here.
Any ideas on how I might do this?
Thanks,
p.
try this:
select
    f.id,
    f.tshirt,
    color.option_id as color_option_id,
    color.option_name as color_option_name,
    color.value_id as color_value_id,
    color.value_name as color_value_name,
    size.option_id as size_option_id,
    size.option_name as size_option_name,
    size.value_id as size_value_id,
    size.value_name as size_value_name
from
    foo f
    inner join foo color
        on f.id = color.id and f.value_id = color.value_id and color.option_id = 2
    inner join foo size
        on f.id = size.id and size.option_id = 3
order by
    f.id,
    color.option_id,
    color.value_id,
    size.value_id;
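Note that the self-join above hardcodes two options (color = option_id 2, size = option_id 3). Since the number of options can vary, a recursive CTE can build the combinations for any option count. A sketch, assuming SQL Server syntax and the same hypothetical table name foo:
WITH opts AS (
    -- number each product's options 1, 2, 3, ... regardless of their actual ids
    SELECT id, tshirt, value_name,
           DENSE_RANK() OVER (PARTITION BY id ORDER BY option_id) AS opt_rank
    FROM foo
),
combos AS (
    -- anchor: each value of the first option starts a combination
    SELECT id, tshirt, opt_rank,
           CAST(value_name AS varchar(max)) AS combo
    FROM opts
    WHERE opt_rank = 1
    UNION ALL
    -- recursive step: extend each partial combination with every value of the next option
    SELECT o.id, o.tshirt, o.opt_rank, c.combo + '/' + o.value_name
    FROM combos AS c
    INNER JOIN opts AS o
        ON o.id = c.id AND o.opt_rank = c.opt_rank + 1
)
SELECT id, tshirt, combo
FROM combos AS c
WHERE opt_rank = (SELECT MAX(opt_rank) FROM opts WHERE opts.id = c.id);
For the sample data this returns six rows for Zombies! (red/small through black/large) and one row (black/small) for Ninja!.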
Looks like distinct can do the trick:
select distinct tshirt
, option_name
, value_name
from YourTable