How to get the cost where a date falls between 2 dates in another table? - sql

I have 2 tables: in one table I am saving the cost for different date batches, and in the other table I am saving the order date. What I need is to get the cost from the first table where the order date falls between the batch dates.
create table #TempDateBatch
(
Sku nvarchar(50),
FromDate datetime,
ToDate datetime,
Cost decimal(12,2)
)
Insert into #TempDateBatch(Sku,FromDate,ToDate,Cost) values('Abc','2020-05-01','2020-05-31',12.3);
Insert into #TempDateBatch(Sku,FromDate,ToDate,Cost) values('Abc','2020-06-01','2020-06-05',10.3);
Create table #TempMain
(
Sku nvarchar(50),
ProductName nvarchar(50),
OrderDate datetime
)
Insert into #TempMain(Sku,ProductName,OrderDate) values('Abc','Demo Product','2020-05-10');
Insert into #TempMain(Sku,ProductName,OrderDate) values('Abc','Demo Product','2020-06-03');
I need to get the Sku and order date from one table, and pick the cost from the other table for the batch that each order date falls into. For example, something like this:
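Based on the sample data above, the expected output would be roughly:
Sku    OrderDate     Cost
Abc    2020-05-10    12.30
Abc    2020-06-03    10.30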

I think you just want a join:
select m.*, db.cost
from #TempMain m left join
     #TempDateBatch db
     on m.sku = db.sku and
        m.orderdate between db.fromdate and db.todate
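Note that because this is a left join, an order whose OrderDate does not fall inside any batch range is still returned, just with a NULL cost.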

Related

How to Fetch records based on date range in another table

I am having an issue getting the correct data out of this query.
I am trying to fetch ClientID from TableA by comparing it with TableB, based on a date range in TableB.
Scenario:
Table A - service data
Table B - outcome data
Report requirement: a list of the clients (and the total number of clients) that have a date entry in Table A but no entry for that client in Table B within the previous six months or the next 45 days of that date.
The two main fields for comparison in the tables are ClientID and Date, and I am using the query below to get those client IDs from Table A:
IF OBJECT_ID('tempdb..#Temp') IS NOT NULL
    DROP TABLE #Temp
CREATE TABLE #Temp
(ClientID int,
Start_dt date,
end_dt date)
Insert into #Temp
select ClientID,
DATEADD(DAY, -180, CAST(a.Date AS date)),
DATEADD(DAY, 45, CAST(a.Date AS date))
FROM TableA a
SELECT DISTINCT B.ClientID,B.Date
FROM Table_B b
LEFT JOIN #temp x
ON b.ClientID = x.ClientID
WHERE CAST(b.Date AS date)< x.start_dt
or CAST(b.Date AS date)> x.end_dt
P.S. I am using all the CASTs because the dates are stored as varchar, since this table is created and populated from Azure.
It could be something really simple, but it's not striking me.
Thanks heaps.
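One possible sketch of the stated requirement (assuming TableA and Table_B each have ClientID and a varchar Date that casts cleanly to date, and treating "previous six months" as 180 days, as in the query above) would be an anti-join with NOT EXISTS:
-- Clients with a date entry in TableA but no Table_B entry in the
-- window from 180 days before to 45 days after that date
SELECT DISTINCT a.ClientID
FROM TableA a
WHERE NOT EXISTS (
    SELECT 1
    FROM Table_B b
    WHERE b.ClientID = a.ClientID
      AND CAST(b.Date AS date) BETWEEN DATEADD(DAY, -180, CAST(a.Date AS date))
                                   AND DATEADD(DAY, 45, CAST(a.Date AS date))
);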

Join Two Temp Table With Full Outer Join

I want to join two temp tables with a full outer join, but it doesn't work properly and always just shows the #RMS values without #RMB. Where and what is wrong in this code?
(#RMS has no nulls)
create table #RMS
(
[Year] int,
[Month] int,
sTAccount bigint,
sRemaining bigint
)
insert into #RMS(Year,Month,sTAccount,sRemaining)
select
YEAR(Date) [Year],
DATEPART(MONTH,Date) [Month],
sum(TAccount) sTAccount,
sum(Remaining) sRemaining
from
SaleInvoices
group by YEAR(Date),DATEPART(MONTH,Date)
order by YEAR(Date),DATEPART(MONTH,Date)
(#RMB has no nulls either, but sometimes the #RMB Month column value and the #RMS Month column value are different)
create table #RMB
(
[Year] int,
[Month] int,
bTAccount bigint,
bRemaining bigint
)
insert into #RMB(Year,Month,bTAccount,bRemaining)
select
YEAR(Date) [Year],
DATEPART(MONTH,Date) [Month],
sum(TAccount) bTAccount,
sum(Remaining) bRemaining
from
BuyInvoices
group by YEAR(Date),DATEPART(MONTH,Date)
order by YEAR(Date),DATEPART(MONTH,Date)
select * from #RMS
Full Outer Join #RMB
on #RMS.Year=#RMB.Year and #RMS.Month=#RMB.Month
group by #RMS.Year, #RMS.Month
order by #RMS.Year, #RMS.Month
Thanks For Your Answers
You have the wrong SELECT list. Replace * with #RMS.*, #RMB.*, or (better) an explicit list of the fields you want, prefixing each name with the table it comes from. This also lets you avoid repeating the fields you've joined on.
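A rough sketch of what that could look like (the COALESCE on the join keys and the dropped GROUP BY are my assumptions, since the monthly aggregation already happened when the temp tables were filled):
select
    coalesce(#RMS.[Year], #RMB.[Year]) as [Year],
    coalesce(#RMS.[Month], #RMB.[Month]) as [Month],
    #RMS.sTAccount,
    #RMS.sRemaining,
    #RMB.bTAccount,
    #RMB.bRemaining
from #RMS
full outer join #RMB
    on #RMS.[Year] = #RMB.[Year] and #RMS.[Month] = #RMB.[Month]
order by 1, 2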

Here is one possible way to find random rows in a table - is there a better method?

We have these four tables:
Store (
row bigint,
id uniqueidentifier,
name varchar
)
Products (
row bigint,
id uniqueidentifier,
storeID uniqueidentifier,
productname varchar
)
Customer (
row bigint,
id uniqueidentifier,
storeID uniqueidentifier,
fName varchar,
lName varchar,
email varchar
)
orders (
row bigint,
id uniqueidentifier,
store_ID uniqueidentifier,
cust_id uniqueidentifier,
prod_id uniqueidentifier,
date datetime
)
We need to find 30 random rows in the orders table for a particular store.
Here is my first try at it:
select TOP 30 *
from orders o
inner join store s on o.Retailer_ID = s.ID
where s.Name = 'XXXX'
  and o.Row in (select ABS(CAST(CAST(NEWID() AS VARBINARY) AS int)) % 100000 from orders)
  and o.Retailer_ID = (select ID from store s where s.Name = 'XXXX')
But I'm not really happy with the results, because I feel the range of rows will never be in the single digits, and the random calculation I'm doing just doesn't seem that great at creating a truly random number from row 1 to the maximum row number. I'm not exactly sure how many rows are in the orders table in the first place, which could be another issue.
Is there a better method to finding random rows in a table?
Things are much simpler.
select TOP 30 *
from orders o
join store s on o.Retailer_ID=s.ID
where s.Name='XXXX'
order by newid()
You can also do it this way:
select TOP 30 *
from orders o inner join
store s
on o.Retailer_ID = s.ID
where s.Name='XXXX'
order by newid();
This is returning random rows by randomly sorting the data and then choosing the top 30 rows. In SQL Server, ordering by newid() is a way to randomly sort the data.

How to generate a sum report by joining 2 tables where the join key is not unique

I have 2 tables that collect records of events on points.
CREATE TABLE report_one
(
date timestamp,
point_id bigint,
income int
)
CREATE TABLE report_two
(
date timestamp,
point_id bigint,
spent int
)
I want to generate a sum report (and additional reports). I want to use a join because I need to support pagination, ordering, and so on.
The problem is that the join key (the point id for the report) is not 1:1, so I get the same row more than once.
insert into report_one values('2013-1-1',1,1)
insert into report_two values('2013-1-1',1,1)
insert into report_two values('2013-1-2',1,1)
select * from report_one r1 left join report_two r2 on r1.point_id = r2.point_id
will return the row from report_one twice, but for the total I need it only once.
I want to be able to create a view with some kind of join between the tables, where each row appears only once.
I want output like this:
1 (pid), 1, 1, 0, 0 - this from report_one
1 (pid), 0, 0, 1, 1 - this from report_two
1 (pid), 0, 0, 1, 1 - this from report_two
UNION ALL could be great, but I don't have the same column types in the two tables.
P.S. The real tables have a lot of columns and the PK is more than one column; I just made it simple for the question.
Why not try the following?
CREATE TABLE report
(
report_id bigint,
date varchar(20),
point_id bigint,
amount int,
amount_type varchar(20)
);
THEN
insert into report values (1,'2013-01-01',1,1,'income');
insert into report values (2,'2013-01-01',1,1,'expense');
insert into report values (2,'2013-01-02',1,1,'expense');
Finally
SELECT report_id,amount_type,SUM(point_id) FROM report GROUP BY report_id,amount_type
The output will sum point_id per report/amount_type; it will then be easier to draw stats per date range, etc., and the overhead from creating tables and joins will also be minimized.
I think that this can work for me:
select date d1, point_id p1, income, 0 spent from report_one
union all
select date d2, point_id p2, 0 income, spent from report_two
I don't have to have the zeros; I added them to demonstrate a case where the columns are not of the same type.
You could group the tables by point_id first, choosing appropriate aggregate functions for the fields you need, and then join them to each other:
select r1.point_id, r1.date, r1.income, r2.spent
from
(
select point_id, max(date) date, sum(income) income
from report_one
group by point_id
) r1
inner join
(
select point_id, max(date) date, sum(spent) spent
from report_two
group by point_id
) r2 on r1.point_id = r2.point_id
Also, the UNION way:
select point_id, date, income sum, 1 is_income
from report_one
union all
select point_id, date, spent sum, 0 is_income
from report_two
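If the goal is one row per point with its total income and total spent, this UNION could then be wrapped in a grouped query, roughly like the sketch below (the column names amount and is_income are just illustrative):
select point_id,
       sum(case when is_income = 1 then amount else 0 end) as income,
       sum(case when is_income = 0 then amount else 0 end) as spent
from (
    select point_id, income as amount, 1 as is_income from report_one
    union all
    select point_id, spent as amount, 0 as is_income from report_two
) t
group by point_id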

Deleting duplicate rows with no unique identifier

I have some data in a table that looks roughly like the following:
table stockData
(
tickId int not null,
timestamp datetime not null,
price decimal(18,5) not null
)
Neither tickId nor timestamp are unique, however the combination of tickId and timestamp is supposed to be unique.
I have some duplicate data in my table, and I'm attempting to remove it. However, I'm coming to the conclusion that there is not enough information with the given data for me to discern one row from the other, and basically no way for me to delete just one of the duplicate rows. My guess is that I will need to introduce some sort of identity column, which would help me to identify one row from the other.
Is this correct, or is there some magic way of deleting one but not both of the duplicate data with a query?
EDIT: Edited to clarify that the tickId and timestamp combo should be unique, but isn't, because of the duplicate data.
Here is a query that will remove duplicates and leave exactly one copy of each unique row. It will work with SQL Server 2005 or higher:
WITH Dups AS
(
SELECT tickId, timestamp, price,
ROW_NUMBER() OVER(PARTITION BY tickid, timestamp ORDER BY (SELECT 0)) AS rn
FROM stockData
)
DELETE FROM Dups WHERE rn > 1
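As a follow-up (not part of the original answer, and the constraint name is just an example), once the duplicates are removed you could add a unique constraint so the tickId/timestamp combination stays unique, as the question intends:
ALTER TABLE stockData
ADD CONSTRAINT uq_stockData_tickId_timestamp UNIQUE (tickId, timestamp)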
select distinct * into temp_table from source_table -- (this table will be created for you)
delete from source_table -- clear out the duplicated rows you don't need; the distinct copies are safe in temp_table
insert into source_table
select * from temp_table
Maybe I'm not understanding your question correctly, but if "tickId" and "timestamp" are guaranteed to be unique then how do you have duplicate data in your table? Could you provide an example or two of what you mean?
However, if you have duplicates of all three columns inside the table, the following script may work. Please test it and make a backup of the database before implementing it, as I just put it together.
declare #x table
(
tickId int not null,
timestamp datetime not null,
price decimal(18,5) not null
)
insert into #x (tickId, timestamp, price)
select tickId,
timestamp,
price
from stockData
group by tickId,
timestamp,
price
having count(*) > 1
union
select tickId,
timestamp,
price
from stockData
group by tickId,
timestamp,
price
having count(*) = 1
delete
from stockData
insert into stockData (tickId, timestamp, price)
select tickId,
timestamp,
price
from #x
alter table stockData add constraint
pk_StockData primary key clustered (tickid, timestamp)