I am updating a product bin location in our database using the reference text from a stock count we recently did. Some of the products have multiple bin locations, so I am using this SQL to pull all the bin locations into a table:
SELECT
    SCT.ProductID,
    STUFF((SELECT ',' + ReferenceText
           FROM StockCountTickets SCT2
           WHERE SCT2.ProductId = SCT.ProductID
             AND SCT2.StockCountID = '10873'
           ORDER BY ProductID
           FOR XML PATH(''), TYPE).value('.', 'varchar(max)'), 1, 1, '') AS Products
FROM
    StockCountTickets SCT
WHERE
    SCT.StockCountID = '10873'
GROUP BY
    ProductID;
I am getting the result I want, except that some of the products were counted twice in the same place, so some of the results contain the same value twice.
Is there any way to get rid of the duplicates?
Presumably, you mean the duplicates in the concatenated list.
Well, you can just remove the duplicates using SELECT DISTINCT or GROUP BY:
SELECT SCT.ProductID,
       STUFF( (SELECT DISTINCT ',' + SCT2.ReferenceText
               FROM StockCountTickets SCT2
               WHERE SCT2.ProductId = SCT.ProductID AND
                     SCT2.StockCountID = 10873
               ORDER BY ',' + SCT2.ReferenceText -- with DISTINCT, the ORDER BY expression must appear in the select list
               FOR XML PATH(''), TYPE).value('.', 'varchar(max)'
             ), 1, 1, '') AS Products
FROM (SELECT DISTINCT SCT.ProductId
      FROM StockCountTickets SCT
      WHERE SCT.StockCountID = 10873
     ) SCT;
Notes:
In a correlated subquery, always qualify all column references to avoid errors.
Assuming StockCountID is a number, don't use quotes for the comparison.
I prefer to generate the distinct rows in the subquery. I believe this is a performance gain.
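On SQL Server 2017 and later, the same dedupe can be sketched with STRING_AGG. Note that SQL Server's STRING_AGG does not accept DISTINCT, so the duplicates still have to be removed in a derived table first:
SELECT d.ProductID,
       STRING_AGG(d.ReferenceText, ',') AS Products
FROM (SELECT DISTINCT ProductID, ReferenceText -- dedupe before aggregating
      FROM StockCountTickets
      WHERE StockCountID = 10873) d
GROUP BY d.ProductID;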
While browsing through sites, I found the perfect example of what I'm looking for. In this code example, all the country names that appear in long formatted rows are concatenated into one result, with a comma and space between each country.
Select CountryName from Application.Countries;
Select SUBSTRING(
(
    SELECT ',' + CountryName AS 'data()'
    FROM Application.Countries
    FOR XML PATH('')
), 2, 9999) As Countries
Source: https://www.mytecbits.com/microsoft/sql-server/concatenate-multiple-rows-into-single-string
My question is: how can you partition these results with a second column that would read as "Continent" in such a way that each country would appear within its respective continent? The theoretical "OVER (PARTITION BY Continent)" in this example would not work without an aggregate function before it. Perhaps there is a better way to accomplish this? Thanks.
Use a continents table (you seem not to have one, so derive one with DISTINCT), and then use the same code in a CROSS APPLY, using the WHERE as a "join" condition:
select *
from
(
    select distinct continent from Application.Countries
) t1
cross apply
(
    select SUBSTRING(
    (
        SELECT ',' + CountryName AS 'data()'
        FROM Application.Countries as c
        WHERE c.continent = t1.continent -- the WHERE must come before FOR XML PATH
        FOR XML PATH('')
    ), 2, 9999) As Countries
) t2
Note that it is more usual, and arguably has more finesse, to use stuff(x, 1, 1, '') instead of substring(x, 2, 9999) to remove the first comma.
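For illustration, a sketch of the same CROSS APPLY rewritten with stuff() (same assumed continent column as above):
select *
from
(
    select distinct continent from Application.Countries
) t1
cross apply
(
    select stuff(
    (
        SELECT ',' + CountryName AS 'data()'
        FROM Application.Countries as c
        WHERE c.continent = t1.continent
        FOR XML PATH('')
    ), 1, 1, '') as Countries
) t2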
I want to combine the Currency field by comparing the Config and Product columns. If both fields repeat with duplicate values but different currencies, then combine the currencies into a single row, as you see in the screenshot.
I tried code like this:
SELECT DISTINCT LC.Config, LC.Product,
       CONCAT(LC.Currency, ',', RC.Currency) AS Currencies
FROM [t_LimitCurrency] LC
INNER JOIN [t_LimitCurrency] RC
    ON LC.[Config] = RC.[Config] AND LC.Product = RC.Product
Please let me know, how to write select statement for this scenario.
The code below should do the trick. I am using FOR XML PATH, but you can use STRING_AGG in the latest versions of SQL Server.
select distinct Config, Product,
       STUFF((SELECT ',' + CAST(Currency AS VARCHAR(max)) [text()]
              FROM (
                    SELECT Currency
                    FROM Yourtable b
                    WHERE a.Config = b.Config and a.product = b.product
                   ) ap
              FOR XML PATH(''), TYPE)
             .value('.', 'NVARCHAR(MAX)'), 1, 1, '') Currency -- remove only the leading comma, so no stray space is left
from Yourtable a
EDIT 1: for the latest versions of SQL Server, the code should be like below.
select distinct Config, Product,
       (SELECT STRING_AGG(CONVERT(NVARCHAR(max), Currency), ',')
        FROM Yourtable b
        WHERE a.Config = b.Config and a.product = b.product) Currency
from Yourtable a
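On versions with STRING_AGG, a plain GROUP BY is arguably more idiomatic than DISTINCT plus a correlated subquery; a sketch against the same Yourtable:
SELECT Config, Product,
       STRING_AGG(CONVERT(NVARCHAR(max), Currency), ',') AS Currency
FROM Yourtable
GROUP BY Config, Product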
Hey all, I have a question about combining like IDs that also have an XML column.
My data I'm trying to combine:
_ID _xml _indivisualCommaList _eachIndividual
------ ------------------------------------------------------------------------------------------------- ----------------------- ---------------
46589 <Individual><TBS>768-hER-382</TBS><Categories /><TBS2>768-hER-382,908-YTY-354</TBS2></Individual> 768-hER-382,908-YTY-354 768-hER-382
46589 <Individual><TBS>768-hER-382</TBS><Categories /><TBS2>768-hER-382,908-YTY-354</TBS2></Individual> 768-hER-382,908-YTY-354 908-YTY-354
Where
_ID = INT
_xml = XML
_indivisualCommaList = VARCHAR(MAX)
_eachIndividual = VARCHAR(MAX)
Pretty (easier to read) XML from above:
<Individual>
<TBS>768-hER-382</TBS>
<Categories />
<TBS2>768-hER-382,908-YTY-354</TBS2>
</Individual>
<Individual>
<TBS>768-hER-382</TBS>
<Categories />
<TBS2>768-hER-382,908-YTY-354</TBS2>
</Individual>
The _xml, _ID, and _indivisualCommaList will always be the same no matter how many rows come back. The only unique column is _eachIndividual.
So I tried the following query to group like IDs together:
SELECT
*
FROM
#tblData
WHERE
_ID = @AssetID
GROUP BY
_ID
Naturally, because of my XML column, I get the error of:
Column '#tblData._xml' is invalid in the select list because it is not
contained in either an aggregate function or the GROUP BY clause.
So I'm really not sure what I can do to combine these rows.
The end result I am looking to have is:
_ID _xml _indivisualCommaList _eachIndividual
------ ------------------------------------------------------------------------------------------------- ----------------------- -----------------------
46589 <Individual><TBS>768-hER-382</TBS><Categories /><TBS2>768-hER-382,908-YTY-354</TBS2></Individual> 768-hER-382,908-YTY-354 768-hER-382,908-YTY-354
So, is this possible to do?
A solution (with horrible performance) without string_agg should be:
SELECT
    dataA._id,
    dataA._xml,
    dataA._indivisualCommaList,
    CONCAT(dataA._eachIndividual, ',', dataB._eachIndividual) as _eachIndividual
FROM data dataA
JOIN data dataB ON dataA._id = dataB._id AND dataA._eachIndividual != dataB._eachIndividual
WHERE dataA._indivisualCommaList = CONCAT(dataA._eachIndividual, ',', dataB._eachIndividual)
db<>fiddle
JOIN the table onto itself to get the necessary data into one row, but only join different individuals.
The WHERE clause ensures that the record with the correct order is kept.
Alternatively, you could use a LIKE to keep the row from the first(?) individual in the list, as sketched below.
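A sketch of that LIKE variant, which keeps the row whose _eachIndividual appears first in the comma list (still only for the two-individual case, like the query above):
SELECT
    dataA._id,
    dataA._xml,
    dataA._indivisualCommaList,
    CONCAT(dataA._eachIndividual, ',', dataB._eachIndividual) as _eachIndividual
FROM data dataA
JOIN data dataB ON dataA._id = dataB._id AND dataA._eachIndividual != dataB._eachIndividual
WHERE dataA._indivisualCommaList LIKE dataA._eachIndividual + ',%' -- dataA holds the first entry in the list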
If I've got it right, and for a given _ID only _eachIndividual varies:
select top(1) with ties t._ID, t._xml, t._indivisualCommaList, t2.x as _eachIndividual
from tbl t
join (select _ID, string_agg(_eachIndividual, ',') x
from tbl
group by _ID) t2 on t._ID = t2._ID
order by row_number() over(partition by t._ID order by t._ID)
Using for xml path aggregation in older versions:
select top(1) with ties t._ID, t._xml, t._indivisualCommaList,
stuff((select ',' + t2._eachIndividual
from tbl t2
where t2._ID = t._ID
for xml path ('')),
1,1, '') _eachIndividual
from tbl t
order by row_number() over(partition by t._ID order by t._ID)
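The TOP (1) WITH TIES trick is needed because XML columns cannot appear in GROUP BY or be fed to MAX directly. As an alternative sketch, you can group on a string cast of the XML and cast it back afterwards:
select t._ID,
       cast(max(cast(t._xml as nvarchar(max))) as xml) as _xml, -- XML is not groupable, so round-trip through nvarchar
       max(t._indivisualCommaList) as _indivisualCommaList,
       string_agg(t._eachIndividual, ',') as _eachIndividual
from tbl t
group by t._ID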
I have a SQL database and I am writing a query:
SELECT *
FROM Consignments
INNER JOIN OrderDetail
ON Consignments.consignment_id = OrderDetail.consignment_id
INNER JOIN UserReferences
ON OrderDetail.record_id = UserReferences.record_id
WHERE Consignments.despatch_date = '2020-04-23'
The first column is consignment_id [this is from the Consignments table]. The final column is senders_reference [this is from the UserReferences table].
Now - the issue I have is that when I run the query to pick up all consignments for a particular date, it displays multiple rows (with a duplicated consignment_id) when there are multiple senders references in the database.
If there is one senders reference number, then there is only 1 row.
This makes sense, because within the front-end for the database the user can enter 1 or more senders references.
Now, what I would like to do is amend my query so that the resulting data displays only 1 row per consignment, and if there are multiple senders reference numbers, to have them within the one field, separated by commas.
Is this doable from the query stage?
Or if not - after export, is it possible to develop a bat file to do the same thing?
For reference - this is what I mean - this is the result I am getting at the moment:
This is what I need:
You can use the older style with the help of FOR XML:
select t.consignment_id,
       stuff((select ', ' + convert(varchar(255), t1.senders_reference)
              from table t1
              where t1.consignment_id = t.consignment_id
              for xml path('')
             ), 1, 2, '' -- the separator ', ' is two characters, so remove two
            ) as senders_reference
from (select distinct consignment_id from table t) t;
Edit: You can use a CTE:
with cte as (
<your query>
)
select t.consignment_id,
       stuff((select ', ' + convert(varchar(255), t1.senders_reference)
              from cte t1
              where t1.consignment_id = t.consignment_id
              for xml path('')
             ), 1, 2, ''
            ) as senders_reference
from (select distinct consignment_id from cte) t;
You seem to want to use the STRING_AGG function.
This answer covers it nicely
ListAGG in SQLSERVER
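On SQL Server 2017 and later, a STRING_AGG sketch against the tables from the question (assuming the reference column is senders_reference):
SELECT Consignments.consignment_id,
       STRING_AGG(CONVERT(varchar(max), UserReferences.senders_reference), ', ') AS senders_reference
FROM Consignments
INNER JOIN OrderDetail
    ON Consignments.consignment_id = OrderDetail.consignment_id
INNER JOIN UserReferences
    ON OrderDetail.record_id = UserReferences.record_id
WHERE Consignments.despatch_date = '2020-04-23'
GROUP BY Consignments.consignment_id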
I want to have a SELECT that, as part of the list of return fields, would also include a CSV of multiple values from another table.
I have code working to determine the CSV from here:
SQL Server 2008 Rows to 1 CSV field
But I am not able to get a working JOIN with the CSV value, or to get at the id field of the derived CSV values.
My two tables are:
job (id,title,location)
skills(id, jobID, skillName, listOrder)
I want to do a select * from job, with each job record having its own derived skills.
Basically, the result should be:
1,dba,texas, [sql,xml,rdp] <-- skills for jobid=1
This seems to do what you are looking for:
SELECT
J.id
, J.title
, J.location
, '[' + STUFF ((SELECT ',' + S.skillName
FROM Skills S
WHERE
J.id = S.jobID
ORDER BY S.listOrder
FOR XML PATH(''), TYPE).value('.', 'VARCHAR(MAX)')
, 1, 1, '') + ']' skillSet
FROM Job J
GROUP BY J.id, J.title, J.location
;
See it in action: SQL Fiddle
Please comment, if and as this requires adjustment / further detail.
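For what it's worth, on SQL Server 2017+ the same output can be sketched with STRING_AGG, which supports ordering via WITHIN GROUP:
SELECT J.id, J.title, J.location,
       '[' + STRING_AGG(S.skillName, ',') WITHIN GROUP (ORDER BY S.listOrder) + ']' AS skillSet
FROM Job J
INNER JOIN Skills S ON S.jobID = J.id
GROUP BY J.id, J.title, J.location;
Note the INNER JOIN drops jobs that have no skills; switch to a LEFT JOIN if those rows should still appear (with a NULL skillSet).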