I have been trying to adapt this solution ( Convert tick data to candlestick (OHLC) with SQL ) to my home project, which uses SQL Server Express LocalDB as its database. My SQL knowledge is a bit rusty, so I'm hoping for help :-)
I have a price as a float(53) (e.g. 109.2) and a time as a datetime (e.g. 2021-02-11 21:26:45.000).
I need to get candlesticks per minute.
Then I have this T-SQL:
SELECT
    t1.price as open,
    m.high,
    m.low,
    t2.price as close,
    open_time
FROM
    (SELECT
        MIN(Publication_time) AS min_time,
        MAX(Publication_time) AS max_time,
        MIN(price) AS low,
        MAX(price) AS high,
        FLOOR((CAST(DATEDIFF(s, Publication_time, GETUTCDATE()) AS BIGINT) * 1000) / (1000 * 60)) AS open_time
    FROM
        stocks
    GROUP BY
        open_time) m
JOIN
    stocks t1 ON t1.Publication_time = min_time
JOIN
    stocks t2 ON t2.Publication_time = max_time
It parses fine, but on execution I get the error
Invalid column name 'open_time'
What is the correct way to do this?
In SQL Server, GROUP BY cannot reference a column alias defined in the same SELECT, which is why you get the 'Invalid column name' error. A common way to avoid repeating the same calculation is to compute it in a CROSS APPLY, e.g.
SELECT
MIN(Publication_time) AS min_time
, MAX(Publication_time) AS max_time
, MIN(price) AS low
, MAX(price) AS high
, C.open_time
FROM stocks S
CROSS APPLY (VALUES (FLOOR((CAST(DATEDIFF(s, Publication_time, GETUTCDATE()) AS BIGINT) * 1000) / (1000 * 60)))) AS C (open_time)
GROUP BY C.open_time
A sub-query will also accomplish the same thing, but isn't as neat (IMO).
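To get the full candlestick, you still need the open and close prices, which means joining the aggregate back to stocks as in your original attempt. A minimal sketch of the complete query, untested and assuming the same stocks table (open_price/close_price are used as aliases because OPEN and CLOSE are reserved words in T-SQL):
SELECT
    t1.price AS open_price
    , m.high
    , m.low
    , t2.price AS close_price
    , m.open_time
FROM (
    SELECT
        MIN(Publication_time) AS min_time
        , MAX(Publication_time) AS max_time
        , MIN(price) AS low
        , MAX(price) AS high
        , C.open_time
    FROM stocks S
    CROSS APPLY (VALUES (FLOOR((CAST(DATEDIFF(s, Publication_time, GETUTCDATE()) AS BIGINT) * 1000) / (1000 * 60)))) AS C (open_time)
    GROUP BY C.open_time
) m
JOIN stocks t1 ON t1.Publication_time = m.min_time
JOIN stocks t2 ON t2.Publication_time = m.max_time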
I'm having trouble with the SELECT portion of this query. I can calculate the absolute change just fine, but when I want to also find out the percent change I get lost in all the subqueries. Using BigQuery. Thank you!
SELECT
station_name,
ridership_2013,
ridership_2014,
absolute_change_2014 / ridership_2013 * 100 AS percent_change,
(ridership_2014 - ridership_2013) AS absolute_change_2014,
It will probably be beneficial to organize your query with CTEs and descriptive aliases to make things a bit easier. For example...
with
data as (select * from project.dataset.table),
ridership_by_year as (
select
extract(year from ride_date) as yr,
count(*) as rides
from data
group by 1
),
ridership_by_year_and_station as (
select
extract(year from ride_date) as yr,
station_name,
count(*) as rides
from data
group by 1,2
),
yearly_changes as (
select
this_year.yr,
this_year.rides,
prev_year.rides as prev_year_rides,
this_year.rides - coalesce(prev_year.rides,0) as absolute_change_in_rides,
safe_divide(this_year.rides - coalesce(prev_year.rides, 0), prev_year.rides) as relative_change_in_rides
from ridership_by_year this_year
left join ridership_by_year prev_year on this_year.yr = prev_year.yr + 1
),
yearly_station_changes as (
select
this_year.yr,
this_year.station_name,
this_year.rides,
prev_year.rides as prev_year_rides,
this_year.rides - coalesce(prev_year.rides,0) as absolute_change_in_rides,
safe_divide(this_year.rides - coalesce(prev_year.rides, 0), prev_year.rides) as relative_change_in_rides
from ridership_by_year_and_station this_year
left join ridership_by_year_and_station prev_year
on this_year.yr = prev_year.yr + 1
and this_year.station_name = prev_year.station_name
)
select * from yearly_changes
--select * from yearly_station_changes
Yes this is a bit longer, but IMO it is much easier to understand.
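If you specifically need the 2013-vs-2014 shape from your original query, one more select over yearly_station_changes gets there. A sketch, untested, assuming the CTEs above:
select
    station_name,
    prev_year_rides as ridership_2013,
    rides as ridership_2014,
    absolute_change_in_rides as absolute_change_2014,
    relative_change_in_rides * 100 as percent_change
from yearly_station_changes
where yr = 2014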
I have a query that's intended as the base dataset for an AR Aging report in a BI tool. The report has to be able to show AR as of a given date across a several-month range. I have the logic working, but I'm seeing pretty slow performance. Code below:
WITH
DAT AS (
SELECT
MY_DATE AS_OF_DATE
FROM
NS_REPORTS."PUBLIC".NETSUITE_DATE_TABLE
WHERE
CAST(CAST(MY_DATE AS TIMESTAMP) AS DATE) BETWEEN '2020-01-01' AND CAST(CAST(CURRENT_DATE() AS TIMESTAMP) AS DATE)
), INV AS
(
WITH BASE AS
(
SELECT
BAS1.TRANSACTION_ID
, DAT.AS_OF_DATE
, SUM(BAS1.AMOUNT) ORIG_AMOUNT_BASE
FROM
"PUBLIC".BILL_TRANS_LINES_BASE BAS1
CROSS JOIN DAT
WHERE
BAS1.TRANSACTION_TYPE = 'Invoice'
AND BAS1.TRANSACTION_DATE <= DAT.AS_OF_DATE
--AND BAS1.TRANSACTION_ID = 6114380
GROUP BY
BAS1.TRANSACTION_ID
, DAT.AS_OF_DATE
)
, TAX AS
(
SELECT
TRL1.TRANSACTION_ID
, SUM(TRL1.AMOUNT_TAXED * - 1) ORIG_AMOUNT_TAX
FROM
CONNECTORS.NETSUITE.TRANSACTION_LINES TRL1
WHERE
TRL1.AMOUNT_TAXED IS NOT NULL
AND TRL1.TRANSACTION_ID IN (SELECT TRANSACTION_ID FROM BASE)
GROUP BY
TRL1.TRANSACTION_ID
)
SELECT
BASE.TRANSACTION_ID
, BASE.AS_OF_DATE
, BASE.ORIG_AMOUNT_BASE
, COALESCE(TAX.ORIG_AMOUNT_TAX, 0) ORIG_AMOUNT_TAX
FROM
BASE
LEFT JOIN TAX ON TAX.TRANSACTION_ID = BASE.TRANSACTION_ID
)
SELECT
AR.*
, CASE
WHEN AR.DAYS_OUTSTANDING < 0
THEN 'Current'
WHEN AR.DAYS_OUTSTANDING BETWEEN 0 AND 30
THEN '0 - 30'
WHEN AR.DAYS_OUTSTANDING BETWEEN 31 AND 60
THEN '31 - 60'
WHEN AR.DAYS_OUTSTANDING BETWEEN 61 AND 90
THEN '61 - 90'
WHEN AR.DAYS_OUTSTANDING > 90
THEN '91+'
ELSE NULL
END DO_BUCKET
FROM
(
SELECT
AR1.*
, TRA1.TRANSACTION_TYPE
, DATEDIFF('day', AR1.AS_OF_DATE, CAST(CAST(TRA1.DUE_DATE AS TIMESTAMP) AS DATE)) DAYS_OUTSTANDING
, AR1.ORIG_AMOUNT_BASE + AR1.ORIG_AMOUNT_TAX + AR1.PMT_AMOUNT AMOUNT_OUTSTANDING
FROM
(
SELECT
INV.TRANSACTION_ID
, INV.AS_OF_DATE
, INV.ORIG_AMOUNT_BASE
, INV.ORIG_AMOUNT_TAX
, COALESCE(PMT.PMT_AMOUNT, 0) PMT_AMOUNT
FROM
INV
LEFT JOIN (
SELECT
TLK.ORIGINAL_TRANSACTION_ID
, DAT.AS_OF_DATE
, SUM(TLK.AMOUNT_LINKED * - 1) PMT_AMOUNT
FROM
CONNECTORS.NETSUITE."TRANSACTION_LINKS" AS TLK
CROSS JOIN DAT
WHERE
TLK.LINK_TYPE = 'Payment'
AND CAST(CAST(TLK.ORIGINAL_DATE_POSTED AS TIMESTAMP) AS DATE) <= DAT.AS_OF_DATE
GROUP BY
TLK.ORIGINAL_TRANSACTION_ID
, DAT.AS_OF_DATE
) PMT ON PMT.ORIGINAL_TRANSACTION_ID = INV.TRANSACTION_ID
AND PMT.AS_OF_DATE = INV.AS_OF_DATE
) AR1
JOIN CONNECTORS.NETSUITE."TRANSACTIONS" TRA1 ON TRA1.TRANSACTION_ID = AR1.TRANSACTION_ID
)
AR
WHERE
1 = 1
--AND CAST(AMOUNT_OUTSTANDING AS NUMERIC(15, 2)) > 0
AND AS_OF_DATE >= '2020-04-22'
As you can see, I'm using a date table for the as-of date logic. I think this is the best way to do it, but I welcome any suggestions for better practice.
If I run the query with a single as-of date, it takes 1 min 6 sec, and the two main aggregates, on TRANSACTION_LINKS and BILL_TRANS_LINES_BASE, each take about 25% of processing time; I'm not sure why. If I run with the filter shown (>= '2020-04-22'), it takes 3 min 33 sec and the aggregates each take about 10% of processing time; they're lower because the ResultWorker takes 63% of processing time writing the results, since there are so many rows.
I'm new to Snowflake but not to SQL. My understanding is that Snowflake does not allow manual creation of indexes, but again, I'm happy to be wrong. Please let me know if you have any ideas for improving the performance of this query.
Thanks in advance.
EDIT 1: [Screenshot of the most expensive node in the query profile]
Without seeing the full explain plan and having some sample data to play with, it is difficult to give any definitive answers, but here are a few thoughts, for what they are worth...
The first are more about readability and may not help performance much:
Don't embed CTEs within each other; just define them in the order they are needed. There is no need to define BASE and TAX within INV (see the sketch at the end of this answer)
Use CTEs as much as possible. Your main SELECT statement has 2 other SELECT statements embedded within it. It would be much more readable if these were defined using CTEs
Specific performance issues:
Keep data volumes as low as possible for as long as possible. Your CROSS JOINs create cartesian products that massively increase the volume of data, so implement them as late in your SQL as possible rather than right at the start as you have done
While it may make your SQL less readable, use as few SQL statements as possible. For example, you should be able to create your INV CTE with a single SELECT statement rather than the 3 statements/CTEs that you are using
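For illustration of the first two points, the nested CTEs could be flattened along these lines (a sketch only, untested, reusing the names from the question):
WITH DAT AS (
    SELECT MY_DATE AS_OF_DATE
    FROM NS_REPORTS."PUBLIC".NETSUITE_DATE_TABLE
    WHERE CAST(CAST(MY_DATE AS TIMESTAMP) AS DATE) BETWEEN '2020-01-01' AND CURRENT_DATE()
), BASE AS (
    SELECT BAS1.TRANSACTION_ID, DAT.AS_OF_DATE, SUM(BAS1.AMOUNT) ORIG_AMOUNT_BASE
    FROM "PUBLIC".BILL_TRANS_LINES_BASE BAS1
    CROSS JOIN DAT
    WHERE BAS1.TRANSACTION_TYPE = 'Invoice'
    AND BAS1.TRANSACTION_DATE <= DAT.AS_OF_DATE
    GROUP BY BAS1.TRANSACTION_ID, DAT.AS_OF_DATE
), TAX AS (
    SELECT TRL1.TRANSACTION_ID, SUM(TRL1.AMOUNT_TAXED * -1) ORIG_AMOUNT_TAX
    FROM CONNECTORS.NETSUITE.TRANSACTION_LINES TRL1
    WHERE TRL1.AMOUNT_TAXED IS NOT NULL
    AND TRL1.TRANSACTION_ID IN (SELECT TRANSACTION_ID FROM BASE)
    GROUP BY TRL1.TRANSACTION_ID
), INV AS (
    SELECT BASE.TRANSACTION_ID, BASE.AS_OF_DATE, BASE.ORIG_AMOUNT_BASE,
        COALESCE(TAX.ORIG_AMOUNT_TAX, 0) ORIG_AMOUNT_TAX
    FROM BASE
    LEFT JOIN TAX ON TAX.TRANSACTION_ID = BASE.TRANSACTION_ID
)
-- ...the main SELECT would follow here unchanged, referencing INV and DAT
SELECT * FROM INV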
I am a beginner with SQL and am looking to write a query to identify users whose first transaction was a successful card payment with an equivalent value over 10 USD (amounts are in different currencies).
This is a theoretical exercise: I have the tables in Excel but currently cannot access any SQL server to validate this query.
Firstly I have defined the result set as follows:
SELECT t.user_id, min(t.created_date), (t.amount / fx.rate / Power (10, cd.exponent) AS amount)
FROM transactions AS t
This should yield the user ID, earliest date of transaction and the transaction amount in USD (original transaction converted into USD and converted into a cash amount from an integer amount).
I'm fairly comfortable with the formula itself; I just want to make sure the joins below bring in fx.rate and cd.exponent correctly so it can actually run:
JOIN fx_rates AS fx
ON ( fx.ccy = t.currency
AND fx.base_ccy = 'USD' )
JOIN currency_details AS cd
ON cd.currency = t.currency
The above should ensure the 'amount' column has all the references necessary to be calculated.
Finally I am looking to apply a set of restrictions so data includes only completed card payments over 10 USD:
WHERE t.type='card_payment'
AND t.state='completed'
AND amount>=10
This is the tricky bit, as I've read that you can't reference an alias ('amount') because it isn't really in the result set yet, but I'm not sure if that applies here.
I have two questions:
1) Would this query produce a list of first transactions which were over 10 USD? I don't want it to find when/if the transaction reached that threshold. I am only interested in the first transaction for each user. If the answer is no, would I be better off creating a table with first transactions and filtering on that instead? I honestly thought that's what I'm doing here.
2) Is referencing the alias 'amount' allowed within the query? If not, is another SELECT required here?
Full query
SELECT t.user_id, min(t.created_date), (t.amount / fx.rate / Power (10, cd.exponent) AS amount)
FROM transactions AS t
JOIN fx_rates AS fx
ON ( fx.ccy = t.currency
AND fx.base_ccy = 'USD' )
JOIN currency_details AS cd
ON cd.currency = t.currency
WHERE t.type='card_payment'
AND t.state='completed'
AND amount>=10
-------UPDATE 1-------
Following numerous comments and answers, the updated query is as follows:
SELECT t.user_id, t.created_date,
(t.amount / fx.rate / Power(10, cd.exponent)) AS amount
FROM (
SELECT *, Row_Number () OVER
(PARTITION BY t.user_id ORDER BY t.created_date) AS RowNum
FROM transactions AS t)
JOIN fx_rates fx
ON ( fx.ccy = t.currency
AND fx.base_ccy = 'USD')
JOIN currency_details cd
ON cd.currency = t.currency
WHERE RowNum = 1
AND t.type = 'card_payment'
AND t.state = 'completed'
AND (t.amount / fx.rate / Power(10, cd.exponent)) >= 10
GROUP BY t.user_id;
The amount column alias is not available in the WHERE condition.
You must repeat the expression:
SELECT t.user_id, min(t.created_date), (t.amount / fx.rate / Power (10, cd.exponent) ) AS amount
FROM transactions AS t
JOIN fx_rates AS fx
ON ( fx.ccy = t.currency
AND fx.base_ccy = 'USD' )
JOIN currency_details AS cd
ON cd.currency = t.currency
WHERE t.type='card_payment'
AND t.state='completed'
AND (t.amount / fx.rate / Power (10, cd.exponent) )>=10
This is because the WHERE clause is evaluated before the SELECT clause, so inside the WHERE clause the alias column name is not yet available; only the original (row-level) amount column content can be used there.
"Would this query produce a list of first transactions which were over 10USD?"
No. Besides the fact that what you've written will produce multiple errors, the logic you are attempting here will not limit the results to the first-ever transaction; rather, it would return the first date of a completed card payment over $10.
"Is referencing the alias 'amount' allowed within the query?"
No. But you can repeat the same part of the select statement.
below is all untested
There are a variety of ways to accomplish what you want. One of the easiest to understand (I think) is to start with a query that orders the transactions by date per user:
select *, row_number() over (partition by user_id order by created_date) rown
from transactions
rown in the above query is the row number for each user (partition by user_id), with the earliest date (order by created_date) being row 1. The output of this query will have multiple rows where rown = 1, one per user. Each of those rows represents the first-ever transaction for that user. You can throw that query into a subquery and select only the rows where rown = 1:
select *
from (
select *, row_number() over (partition by user_id order by created_date) rown
from transactions
) t
where t.rown=1
NOW you can add the rest of your stuff (note I don't know what to do with your power() statement)
select t.user_id, t.created_date, t.amount / fx.rate / 10.0 / cd.exponent AS amount
from (
select *, row_number() over (partition by user_id order by created_date) rown
from transactions
) t
inner JOIN fx_rates AS fx
ON fx.ccy = t.currency
AND fx.base_ccy = 'USD'
inner JOIN currency_details AS cd ON cd.currency = t.currency
where t.rown=1
and t.type='card_payment'
AND t.state='completed'
and t.amount / fx.rate / 10.0 / cd.exponent > 10.0
You might want >= 10.0 if you actually want at least $10.
You can also use a CTE (Common Table Expression):
WITH User_CTE (user_id , created_date , amount, RowNum)
AS
(
SELECT user_id , t.created_date , (t.amount / fx.rate / Power(10, cd.exponent)) AS amount
, ROW_NUMBER() OVER (PARTITION BY t.user_id ORDER BY t.created_date ASC) AS RowNum
FROM transactions t
JOIN fx_rates fx
ON ( fx.ccy = t.currency
AND fx.base_ccy = 'USD')
JOIN currency_details cd
ON cd.currency = t.currency
WHERE
t.type = 'card_payment'
AND t.state = 'completed'
AND (t.amount / fx.rate / Power(10, cd.exponent)) >= 10
)
SELECT
user_id
, created_date
, amount
FROM
User_CTE
WHERE
RowNum = 1
Note that because the WHERE clause is evaluated before ROW_NUMBER() here, RowNum = 1 is the user's first qualifying card payment, not necessarily their first transaction overall; to match the requirement, the rows need to be numbered before the filters are applied, as in the previous answer.
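A sketch of that reordering (untested), which numbers every transaction first and applies the filters afterwards, so RowNum = 1 really is each user's first transaction:
WITH Ordered AS
(
    SELECT t.*,
        ROW_NUMBER() OVER (PARTITION BY t.user_id ORDER BY t.created_date) AS RowNum
    FROM transactions t
)
SELECT o.user_id,
    o.created_date,
    (o.amount / fx.rate / POWER(10, cd.exponent)) AS amount
FROM Ordered o
JOIN fx_rates fx
    ON fx.ccy = o.currency
    AND fx.base_ccy = 'USD'
JOIN currency_details cd
    ON cd.currency = o.currency
WHERE o.RowNum = 1
    AND o.type = 'card_payment'
    AND o.state = 'completed'
    AND (o.amount / fx.rate / POWER(10, cd.exponent)) >= 10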
I have never done SQL before and I have been reading up on it. There is an exercise in the book I am reading to get me started (I am also using the W3Schools website), and the book is telling me to attempt the following:
A table Trades, which has the following structure:
trade_id: primary key
timestamp: timestamp of trade
security: underlying security (bought or sold in trade)
quantity: underlying quantity (positive signifies bought, negative indicates sold)
price: price of one security item for this trade
Consider the following table
CREATE TABLE tbProduct
([TRADE_ID] varchar(8), [TIMESTAMP] varchar(8), [SECURITY] varchar(8), [QUANTITY] varchar(8), [PRICE] varchar(8))
;
INSERT INTO tbProduct
([TRADE_ID], [TIMESTAMP], [SECURITY], [QUANTITY], [PRICE])
VALUES
('TRADE1', '10:01:05', 'BP', '+100', '20'),
('TRADE2', '10:01:06', 'BP', '+20', '15'),
('TRADE3', '10:10:00', 'BP', '-100', '19'),
('TRADE4', '10:10:01', 'BP', '-100', '19')
;
In the book it is telling me to write a query to find all trades that happened in the range of 10 seconds and having prices differing by more than 10%.
The result should also list the percentage of price difference between the 2 trades.
For a person who has not done SQL before, reading that has really confused me. They have also provided the expected outcome, but I am unsure how they arrived at it.
Expected result:
First_Trade Second_Trade PRICE_DIFF
TRADE1 TRADE2 25
I have created a fiddle if this helps. If someone could show me how to get the expected result, it will help me understand the book exercise.
Thanks
This will get the result you want.
;with cast_cte
as
(
select [TRADE_ID], cast([TIMESTAMP] as datetime) timestamp, [SECURITY], [QUANTITY], cast([PRICE] as float) as price
from tbProduct
)
select t1.trade_id, t2.trade_id, datediff(ms, t1.timestamp, t2.timestamp) as milliseconds_diff,
((t1.price - t2.price) / t1.price) * 100 as price_diff
from cast_cte t1
inner join cast_cte t2
on datediff(ms, t1.timestamp, t2.timestamp) between 0 and 10000
and t1.trade_id <> t2.trade_id
where ((t1.price - t2.price) / t1.price) * 100 > 10
or ((t1.price - t2.price) / t1.price) * 100 < -10
However, there are a number of problems with the schema and general query parameters:
1) The columns are all varchars. This is very inefficient because they all need to be cast to their appropriate data types in order to get the results you desire. Use datetime, int, float etc. (I have used a CTE to clean up the query as per #Jeroen-Mostert's suggestion)
2) As the table gets larger, this query will start performing very poorly because the predicate used (the 10-second timestamp window) is not properly indexed (see the sketch below).
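For example, once the columns are properly typed (as in the tbProduct2 schema suggested in the next answer), an index along these lines (hypothetical name, untested) would support a seek, provided the join uses a sargable range predicate such as BETWEEN ... AND DATEADD(s, 10, ...) rather than DATEDIFF:
CREATE INDEX ix_tbProduct2_security_timestamp
ON tbProduct2 ([SECURITY], [TIMESTAMP]) INCLUDE ([PRICE]);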
A slightly different approach to the other answer, but with pretty much the same effect. I use BETWEEN to find the date range rather than DATEDIFF.
select
trade1.trade_ID as TRADE1,
trade2.trade_ID as TRADE2,
(cast(trade1.price as float)-cast(trade2.price as float))/cast(trade1.price as float)*100 as PRICE_DIFF_PERC
from
tbProduct trade1
inner join
tbProduct trade2
on
trade2.timestamp between trade1.timestamp and dateadd(s,10,trade1.TIMESTAMP)
and trade1.TRADE_ID <> trade2.TRADE_ID
where (cast(trade1.price as float)-cast(trade2.price as float))/cast(trade1.price as float) >0.1
The schema could definitely be improved; removing the need for 'CAST's would make this a lot clearer:
CREATE TABLE tbProduct2
([TRADE_ID] varchar(8), [TIMESTAMP] datetime, [SECURITY] varchar(8), [QUANTITY] int, [PRICE] float)
;
Allows you to do:
select
trade1.trade_ID as TRADE1,
trade2.trade_ID as TRADE2,
((trade1.price-trade2.price)/trade1.price)*100 as PRICE_DIFF_PERC
from
tbProduct2 trade1
inner join
tbProduct2 trade2
on
trade2.timestamp between trade1.timestamp and dateadd(s,10,trade1.TIMESTAMP)
and trade1.TRADE_ID <> trade2.TRADE_ID
where (trade1.price-trade2.price) /trade1.price >0.1
;
I have used the LEAD function to get the expected result. Try this:
select
iq.trade_id as FIRST_TRADE,
t1 as SECOND_TRADE,
((price-t3)/price*100) as PRICE_DIFF
from
(
Select trade_id, timestamp, security, quantity, cast(price as float) price,
lead(trade_id) over (partition by security order by timestamp) t1
,lead(timestamp) over (partition by security order by timestamp) t2
,lead(cast(price as float)) over (partition by security order by timestamp) t3
from tbProduct
) iq
where DATEDIFF(SECOND, iq.timestamp,iq.t2) between 0 and 10
and ((price-t3)/price*100) > 10
It is based on the fact that the partition is done over security. Feel free to comment or suggest corrections.
I have a MS SQL table that contains stock data with the following columns: Id, Symbol, Date, Open, High, Low, Close.
I would like to self-join the table, so I can get a day-to-day % change for Close.
I must create a query that joins the table with itself in such a way that every record also contains the data from the previous session (be aware that I cannot simply use yesterday's date).
My idea is to do something like this:
select * from quotes t1
inner join quotes t2
on t1.symbol = t2.symbol and
t2.date = (select max(date) from quotes where symbol = t1.symbol and date < t1.date)
However I do not know if that's the correct/fastest way. What should I take into account when thinking about performance? (E.g. will putting UNIQUE index on a (Symbol, Date) pair improve performance?)
There will be around 100,000 new records every year in this table. I am using MS SQL Server 2008
One option is to use a recursive cte (if I'm understanding your requirements correctly):
WITH RNCTE AS (
SELECT *, ROW_NUMBER() OVER (PARTITION BY symbol ORDER BY date) rn
FROM quotes
),
CTE AS (
SELECT symbol, date, rn, cast(0 as decimal(10,2)) perc, closed
FROM RNCTE
WHERE rn = 1
UNION ALL
SELECT r.symbol, r.date, r.rn, cast(c.closed/r.closed as decimal(10,2)) perc, r.closed
FROM CTE c
JOIN RNCTE r on c.symbol = r.symbol AND c.rn+1 = r.rn
)
SELECT * FROM CTE
ORDER BY symbol, date
SQL Fiddle Demo
If you need a day-to-day percentage change for each symbol, it's easy enough to add an additional column for that amount -- I wasn't completely sure what your intentions were, so the above just divides the current closed amount by the previous closed amount.
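For example, the recursive member could compute a day-over-day percentage change instead (a sketch, untested, keeping the column names from the demo):
SELECT r.symbol, r.date, r.rn,
    cast((r.closed - c.closed) / c.closed * 100 as decimal(10,2)) perc,
    r.closed
FROM CTE c
JOIN RNCTE r on c.symbol = r.symbol AND c.rn+1 = r.rn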
Something like this would work in SQLite:
SELECT .., t1.Close AS prev_close, MAX(t1.date) AS prev_date
FROM quotes t1, quotes t2
WHERE t1.symbol = t2.symbol
AND t1.date < t2.date
GROUP BY t2.ID
Note that SQLite's bare-column behaviour does the heavy lifting here: because MAX(t1.date) appears in the result set, the non-aggregated t1 columns take their values from the row with the maximum date. That behaviour is SQLite-specific, so in MSSQL this would need to be rewritten.
Index on (symbol, date)
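For example (assuming the table and column names from the question), this is the index the OP asks about; making it UNIQUE is appropriate if a symbol can only have one row per date:
CREATE UNIQUE INDEX ix_quotes_symbol_date ON quotes (symbol, date);
With that in place, each quote's previous row can be fetched with CROSS APPLY: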
SELECT *
FROM quotes q_curr
CROSS APPLY (
SELECT TOP(1) *
FROM quotes
WHERE symbol = q_curr.symbol
AND date < q_curr.date
ORDER BY date DESC
) q_prev
You can do something like this:
with OrderedQuotes as
(
select
row_number() over(order by Symbol, Date) RowNum,
ID,
Symbol,
Date,
Open,
High,
Low,
Close
from Quotes
)
select
a.Symbol,
a.Date,
a.Open,
a.High,
a.Low,
a.Close,
b.Date PrevDate,
b.Open PrevOpen,
b.High PrevHigh,
b.Low PrevLow,
b.Close PrevClose,
(a.Close - b.Close) / b.Close PctChange
from OrderedQuotes a
join OrderedQuotes b on a.Symbol = b.Symbol and a.RowNum = b.RowNum + 1
If you change the last join to a left join you get a row for the first date for each symbol, not sure if you need that.
You can use a CTE with the ROW_NUMBER ranking function:
;WITH cte AS
(
SELECT symbol, date, [Open], [High], [Low], [Close],
ROW_NUMBER() OVER(PARTITION BY symbol ORDER BY date) AS Id
FROM quotes
)
SELECT c1.Id, c1.symbol, c1.date, c1.[Open], c1.[High], c1.[Low], c1.[Close],
ISNULL(c2.[Close] / c1.[Close], 0) AS perc
FROM cte c1 LEFT JOIN cte c2 ON c1.symbol = c2.symbol AND c1.Id = c2.Id + 1
ORDER BY c1.symbol, c1.date
To improve performance (avoiding sorting and RID lookups), use this index:
CREATE INDEX ix_symbol$date_quotes ON quotes(symbol, date) INCLUDE([Open], [High], [Low], [Close])
Simple demo on SQLFiddle
What you had is fine. I don't know if translating the sub-query into a join will help. However, you asked for it, so the way to do it might be to join the table to itself once more.
select *
from quotes t1
inner join quotes t2
on t1.symbol = t2.symbol and t1.date > t2.date
left outer join quotes t3
on t1.symbol = t3.symbol and t3.date > t2.date and t3.date < t1.date
where t3.date is null
You could do something like this:
DECLARE #Today DATETIME
SELECT #Today = DATEADD(DAY, 0, DATEDIFF(DAY, 0, CURRENT_TIMESTAMP))
;WITH today AS
(
SELECT Id ,
Symbol ,
Date ,
[OPEN] ,
High ,
LOW ,
[CLOSE],
DATEADD(DAY, -1, Date) AS yesterday
FROM quotes
WHERE date = #today
)
SELECT *
FROM today
LEFT JOIN quotes yesterday ON today.Symbol = yesterday.Symbol
AND today.yesterday = yesterday.Date
That way you limit your "today" results, if that's an option.
EDIT: The CTEs listed in other answers may work well, but I tend to be hesitant to use ROW_NUMBER when dealing with 100K rows or more. If the previous day may not always be yesterday, I tend to prefer to pull the check for the previous day out into its own query and then use it for reference:
DECLARE #Today DATETIME, #PreviousDay DATETIME
SELECT #Today = DATEADD(DAY, 0, DATEDIFF(DAY, 0, CURRENT_TIMESTAMP));
SELECT #PreviousDay = MAX(Date) FROM quotes WHERE Date < #Today;
WITH today AS
(
SELECT Id ,
Symbol ,
Date ,
[OPEN] ,
High ,
LOW ,
[CLOSE]
FROM quotes
WHERE date = #today
)
SELECT *
FROM today
LEFT JOIN quotes AS previousday
ON today.Symbol = previousday.Symbol
AND previousday.Date = #PreviousDay
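For reference, on SQL Server 2012 or later (not the OP's 2008 instance, where it is unavailable), the LAG window function avoids the self-join entirely. A sketch, untested:
WITH WithPrev AS
(
    SELECT Symbol, Date, [Close],
        LAG([Close]) OVER (PARTITION BY Symbol ORDER BY Date) AS PrevClose
    FROM quotes
)
SELECT Symbol, Date, [Close], PrevClose,
    ([Close] - PrevClose) / PrevClose * 100 AS PctChange
FROM WithPrev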