I have a table tbl_Country, which contains columns called ID and Name. The Name column holds multiple country names separated by commas. When I pass a comma-separated list of country names, I want the IDs of the rows whose Name column contains any of those countries. I am splitting the country names using a function; the sample query looks like this:
@country varchar(50)
SELECT *
FROM tbl_Country
WHERE (SELECT *
       FROM Function(@Country)) IN (SELECT *
                                    FROM Function(Name))
tbl_Country
ID  Name
1   'IN,US,UK,SL,NZ'
2   'IN,PK,SA'
3   'CH,JP'
parameter @country = 'IN,SA'
I have to get:
ID
1
2
NOTE: The function splits the string and returns the values as a table.
Try this
SELECT * FROM tbl_Country C
LEFT JOIN tbl_Country C1 ON C1.Name=C.Country
Try this:
SELECT *
FROM tbl_Country C
WHERE ',' + @country + ',' LIKE '%,' + C.Name + ',%';
Basically, by storing multiple values in a single column you are violating first normal form (1NF). The following is therefore not a great approach, but it does provide the result you are looking for:
declare @country varchar(50) = 'IN,SA'
declare @counterend int
declare @counterstart int = 1
declare @singleCountry varchar(10)

set @counterend = (select COUNT(*) from fnSplitStringList(@country))

create table #temp10(
    id int
    ,name varchar(50))

while @counterstart <= @counterend
begin
    ;with cte as (
        select stringliteral country
             , ROW_NUMBER() over (order by stringliteral) countryseq
        from fnSplitStringList(@country))
    select @singleCountry = (select country from cte where countryseq = @counterstart)

    insert into #temp10(id, name)
    select * from tbl_country t1
    where not exists (select id from #temp10 t2 where t1.id = t2.id)
      and name like '%' + @singleCountry + '%'

    set @counterstart = @counterstart + 1
end

select * from #temp10

drop table #temp10
How it works: it splits the passed string and numbers each value. It then loops once per value (country) and inserts any matching rows that are not already in the temp table.
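(This assumes fnSplitStringList returns one row per value in a column named stringliteral, roughly like this:)
SELECT stringliteral FROM fnSplitStringList('IN,SA');
-- stringliteral
-- -------------
-- IN
-- SA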
Try this:
select a.id FROM tbl_Country a inner join
    (SELECT country FROM dbo.Function(@Country)) b on a.name = b.country
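If Name itself holds a comma-separated list (as in the sample data), both sides need to be split. A sketch under that assumption, with the split function's output column assumed to be named country:
-- Sketch only: split both the parameter and the Name column
SELECT DISTINCT a.id
FROM tbl_Country AS a
CROSS APPLY dbo.[Function](a.Name) AS n           -- one row per country in Name
INNER JOIN dbo.[Function](@Country) AS p          -- one row per country in the parameter
        ON p.country = n.country;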
I have a function for checking if certain tables exist in my database, using part of the table name as a key to match (my table naming conventions include unique table name prefixes). It uses a select statement as below, where @TablePrefix is a parameter to the function and contains the first few characters of the table name:
DECLARE @R bit;
SELECT @R = COUNT(X.X)
FROM (
    SELECT TOP(1) 1 X FROM sys.tables WHERE [name] LIKE @TablePrefix + '%'
) AS X;
RETURN @R;
My question is, how can I extend this function to work for #temp tables too?
I have tried checking the first char of the name for # then using the same logic to select from tempdb.sys.tables, but this seems to have a fatal flaw - it returns a positive result when any temp table exists with a matching name, even if not created by the current session - and even if created by SPs in a different database. There does not seem to be any straightforward way to narrow the selection down to only those temp tables that exist in the context of the current session.
I cannot use the other method that seems universally to be suggested for checking temp tables - IF OBJECT_ID('tempdb..#temp1') IS NOT NULL - because that requires me to know the full name of the table, not just a prefix.
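For illustration, the session-blind check described above looks something like this (a sketch, fragment of the function body):
-- Naive version (sketch): matches temp tables created by ANY session
IF LEFT(@TablePrefix, 1) = '#'
    SELECT @R = CASE WHEN EXISTS (SELECT 1 FROM tempdb.sys.tables WHERE [name] LIKE @TablePrefix + '%')
                     THEN 1 ELSE 0 END;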
-- Set up some example temp tables
create table #abc(id bit);
create table #abc_(id bit);
create table #def__(id bit);
create table #xyz___________(id bit);
go
-- Temp table names in tempdb.sys.tables are padded with underscores plus a
-- session-specific suffix, while object_id('tempdb..#name') resolves only the
-- temp tables visible to the current session. Trying every prefix length of
-- each stored name and keeping those object_id() can resolve recovers the
-- logical names of this session's temp tables only.
select distinct (left(t.name, n.r)) as tblname
from tempdb.sys.tables as t with(nolock)
cross join (select top(116) row_number() over(order by(select null)) as r from sys.all_objects with(nolock)) as n
where t.name like '#%'
and object_id('tempdb..'+left(t.name, n.r)) is not null;
drop table #abc;
drop table #abc_;
drop table #def__;
drop table #xyz___________;
Try something like this:
DECLARE @TablePrefix VARCHAR(50) = '#temp';

DECLARE @R BIT, @pre VARCHAR(50) = @TablePrefix + '%';

SELECT @R = CASE LEFT(@pre, 1)
    WHEN '#' THEN (
        SELECT CASE WHEN EXISTS (SELECT * FROM tempdb.sys.tables WHERE [name] LIKE @pre) THEN 1
                    ELSE 0
               END)
    ELSE (
        SELECT CASE WHEN EXISTS (SELECT * FROM sys.tables WHERE [name] LIKE @pre) THEN 1
                    ELSE 0
               END)
    END;

SELECT @R AS TableExists;
I have a table which has the following values:
ID | Name
---------------
1 | Anavaras
2 | Lamurep
I need a query that outputs the values which don't have an entry in the table.
For example, if my WHERE clause contains id in ('1','2','3','4'), it should produce this output for the table above:
3 |
4 |
You would put this into a "derived table" and use left join or a similar construct:
select v.id
from (values (1), (2), (3), (4)) v(id)
left join t on t.id = v.id
where t.id is null;
Something like this:
"SELECT id FROM table WHERE name IS NULL"
I'd assume?
First you need to split your IN list into a table. A sample split function is below:
CREATE FUNCTION [dbo].[split]
(
    @str varchar(max),
    @sep char
)
RETURNS @ids TABLE
(
    id varchar(20)
)
AS
BEGIN
    declare @pos int, @id varchar(20)
    while len(@str) > 0
    begin
        -- find the next separator (append one so the last value is found too)
        select @pos = charindex(@sep, @str + @sep)
        -- take the value before the separator, then drop it from the string
        select @id = LEFT(@str, @pos - 1), @str = SUBSTRING(@str, @pos + 1, 10000000)
        insert @ids(id) values(@id)
    end
    RETURN
END
Then you can use this function.
select ids.id from dbo.split('1,2,3,4,5', ',') ids
left join myTable t on t.id = ids.id
where t.id is null
-- if the table's ID column is varchar, pass the list quoted: '''1'',''2'',''3'''
My dbo.Report table has a column called Name. I need to somehow select the Name column in my sub select. How can I get the Name values from the sub select as well? Once I have those I need to be able to run another select query such as this:
SELECT * FROM MyOtherTable WHERE Name = @pName
where @pName would be a newly created variable holding the values from the sub-select, possibly? I'm not sure how that works. Or something like this:
SELECT * FROM MyOtherTable WHERE Name IN (the values from my sub select go here)
PROC:
SELECT ListingInfo,
       COALESCE((SELECT SUM(ListingViews)
                 FROM dbo.Report
                 WHERE (ID = @pID)
                   AND (DateEntered BETWEEN DATEADD(MONTH, 0, @pFromDate) AND DATEADD(MONTH, 1, @pToDate) - 1)
                 GROUP BY ID), 0) AS 'Views'
FROM dbo.Reporting r
INNER JOIN dbo.Listings l ON (r.ID = l.ID)
WHERE (r.ID = @pID)
  AND l.TypeCode = 20
You can use a table variable to store your names.
DECLARE @Names TABLE
(
    Name varchar(250),
    SomeInt int
)

INSERT INTO @Names (Name, SomeInt)
SELECT Name, SUM(ListingViews)
FROM WhateverTable
GROUP BY Name

SELECT * FROM OtherTable WHERE Name IN (SELECT Name FROM @Names)
You can go on to use @Names as if it were any other table, and if you use it in a stored procedure, the table variable is cleaned up automatically when the procedure ends.
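For example, a join against the table variable works just as well as the IN (a sketch reusing the names from the snippet above):
SELECT o.*, n.SomeInt
FROM OtherTable AS o
INNER JOIN @Names AS n ON n.Name = o.Name;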
Something like this?
Or what else do you need?
Didn't understand it at all, sorry.
SELECT *
FROM MyOtherTable
WHERE Name IN (SELECT name FROM Table WHERE Name = @pName)

Edit:
Something like this?

SELECT COUNT(*), name
FROM MyOtherTable O, Table T
WHERE O.Name = T.NAME
  AND T.NAME = @pName
GROUP BY name
I want to compare the individual words from the user input to individual words from a column in my table.
For example, consider these rows in my table:
ID Name
1 Jack Nicholson
2 Henry Jack Blueberry
3 Pontiac Riddleson Jack
Consider that the user's input is 'Pontiac Jack'. I want to assign weights/ranks for each match, so I can't use a blanket LIKE (WHERE Name LIKE @SearchString).
If Pontiac is present in any row, I want to award it 10 points. Each match for Jack gets another 10 points, etc. So row 3 would get 20 points, and rows 1 and 2 get 10.
I have split the user input into individual words, and stored them into a temporary table #SearchWords(Word).
But I can't figure out a way to have a SELECT statement that allows me to combine this. Maybe I'm going about this the wrong way?
Cheers,
WT
For SQL Server, try this:
SELECT Word, COUNT(Word) * 10 AS WordCount
FROM SourceTable
INNER JOIN SearchWords ON CHARINDEX(SearchWords.Word, SourceTable.Name) > 0
GROUP BY Word
What about this? (This is MySQL syntax; I think you only have to replace the CONCAT and use + instead.)
SELECT names.id, count(searchwords.word)
FROM names, searchwords
WHERE names.name LIKE CONCAT('%', searchwords.word, '%')
GROUP BY names.id
Then you would get a result with the ID from the names table and the count of the words that match that id.
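The same idea in SQL Server syntax, with + in place of CONCAT (a sketch):
SELECT names.id, COUNT(searchwords.word) AS matches
FROM names
INNER JOIN searchwords
    ON names.name LIKE '%' + searchwords.word + '%'
GROUP BY names.id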
You could do it via a common table expression that works out the weighting. For example:
--** Set up the example tables and data
DECLARE @Name TABLE (id INT IDENTITY, name VARCHAR(50));
DECLARE @SearchWords TABLE (word VARCHAR(50));

INSERT INTO @Name
        (name)
VALUES  ('Jack Nicholson')
       ,('Henry Jack Blueberry')
       ,('Pontiac Riddleson Jack')
       ,('Fred Bloggs');

INSERT INTO @SearchWords
        (word)
VALUES  ('Jack')
       ,('Pontiac');

--** Example SELECT with @Name selected and ordered by words in @SearchWords
WITH Order_CTE (weighting, id)
AS (
    SELECT COUNT(*) AS weighting
         , id
    FROM @Name AS n
    JOIN @SearchWords AS sw
        ON n.name LIKE '%' + sw.word + '%'
    GROUP BY id
)
SELECT n.name
     , cte.weighting
FROM @Name AS n
JOIN Order_CTE AS cte
    ON n.id = cte.id
ORDER BY cte.weighting DESC;
Using this technique, you can also apply a value to each search word if you want to. So you could make Jack more valuable than Pontiac. That would look something like this:
--** Set up the example tables and data
DECLARE @Name TABLE (id INT IDENTITY, name VARCHAR(50));
DECLARE @SearchWords TABLE (word VARCHAR(50), value INT);

INSERT INTO @Name
        (name)
VALUES  ('Jack Nicholson')
       ,('Henry Jack Blueberry')
       ,('Pontiac Riddleson Jack')
       ,('Fred Bloggs');

--** Set up search words with associated value
INSERT INTO @SearchWords
        (word, value)
VALUES  ('Jack', 10)
       ,('Pontiac', 20)
       ,('Bloggs', 40);

--** Example SELECT with @Name selected and ordered by words and values in @SearchWords
WITH Order_CTE (weighting, id)
AS (
    SELECT SUM(sw.value) AS weighting
         , id
    FROM @Name AS n
    JOIN @SearchWords AS sw
        ON n.name LIKE '%' + sw.word + '%'
    GROUP BY id
)
SELECT n.name
     , cte.weighting
FROM @Name AS n
JOIN Order_CTE AS cte
    ON n.id = cte.id
ORDER BY cte.weighting DESC;
Seems to me that the best thing to do would be to maintain a separate table with all the individual words. Eg:
ID Word FK_ID
1 Jack 1
2 Nicholson 1
3 Henry 2
(etc)
This table would be kept up to date with triggers, and you'd have a non-clustered index on 'Word', 'FK_ID'. Then the SQL to produce your weightings would be simple and efficient.
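With that in place, the weighting query could look something like this (a sketch; the words table name NameWords is assumed, and #SearchWords is the temp table from the question):
SELECT w.FK_ID AS id, COUNT(*) * 10 AS score
FROM NameWords AS w
INNER JOIN #SearchWords AS sw ON sw.Word = w.Word
GROUP BY w.FK_ID
ORDER BY score DESC;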
How about something like this...
select id, MAX(names.name), count(id) * 10
from names
inner join #SearchWords as sw
    on names.name like '%' + sw.word + '%'
group by id
This assumes that the table holding the names is called "names".
Is there any way to group by all the columns of a table without specifying the column names? Like:
select * from table group by *
The DISTINCT Keyword
I believe what you are trying to do is:
SELECT DISTINCT * FROM MyFooTable;
If you group by all columns, you are just requesting that duplicate data be removed.
For example a table with the following data:
id | value
----+----------------
1 | foo
2 | bar
1 | foo
3 | something else
If you perform the following query, which is essentially the same as SELECT * FROM MyFooTable GROUP BY * (assuming * means all columns):
SELECT * FROM MyFooTable GROUP BY id, value;
id | value
----+----------------
1 | foo
3 | something else
2 | bar
It removes all duplicate rows, which makes it semantically identical to using the DISTINCT keyword, except possibly for the ordering of the results. For example:
SELECT DISTINCT * FROM MyFooTable;
id | value
----+----------------
1 | foo
2 | bar
3 | something else
If you are using SQL Server, the DISTINCT keyword should work for you. (Not sure about other databases.)
declare @t table (a int, b int)

insert into @t (a, b) select 1, 1
insert into @t (a, b) select 1, 2
insert into @t (a, b) select 1, 1

select distinct * from @t
results in
a b
1 1
1 2
I wanted to do counts and sums over the full result set. I achieved grouping over everything with GROUP BY 1=1.
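In SQL Server specifically, the documented way to aggregate the whole result set as a single group is the empty grouping set (a sketch; the table and column names are placeholders):
SELECT COUNT(*) AS row_count,
       SUM(SomeColumn) AS total   -- SomeColumn is a placeholder
FROM MyFooTable
GROUP BY ();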
Short answer: no. A GROUP BY clause intrinsically requires an explicit list of grouping expressions; specifying a wildcard would leave the statement open to interpretation and unpredictable behaviour.
Nope. Are you trying to do some aggregation? If so, you could do something like this to get what you need:
;with a as
(
    select CharField, sum(IntField) as Total
    from Table
    group by CharField
)
select t.*, a.Total
from Table t
inner join a
    on t.CharField = a.CharField
No, because that would fundamentally mean you are not grouping anything. If you group by all columns (and have a properly defined table with a unique index), then SELECT * FROM table is essentially the same thing as SELECT * FROM table GROUP BY *.
Here is my suggestion: build the column list dynamically from INFORMATION_SCHEMA.COLUMNS, then group by every column to find full duplicates:
DECLARE @FIELDS VARCHAR(MAX), @NUM INT
--DROP TABLE #FIELD_LIST

SET @NUM = 1
SET @FIELDS = ''

SELECT
    'SEQ' = IDENTITY(int, 1, 1),
    COLUMN_NAME
INTO #FIELD_LIST
FROM Req.INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = N'new340B'

WHILE @NUM <= (SELECT COUNT(*) FROM #FIELD_LIST)
BEGIN
    SET @FIELDS = @FIELDS + ',' + (SELECT COLUMN_NAME FROM #FIELD_LIST WHERE SEQ = @NUM)
    SET @NUM = @NUM + 1
END

SET @FIELDS = RIGHT(@FIELDS, LEN(@FIELDS) - 1)

EXEC('SELECT ' + @FIELDS + ', COUNT(*) AS QTY FROM [Req].[dbo].[new340B] GROUP BY ' + @FIELDS + ' HAVING COUNT(*) > 1 ')
You can use GROUP BY ALL, but be careful: GROUP BY ALL is deprecated and will be removed in a future version of SQL Server.
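For reference, GROUP BY ALL is a deprecated T-SQL option that changes which groups are returned (groups with no rows passing the WHERE filter still appear), rather than a way to group by every column. A sketch with placeholder names:
SELECT CountryCode, SUM(Amount) AS Total   -- table and column names are placeholders
FROM Sales
WHERE Amount > 100
GROUP BY ALL CountryCode;
-- groups with no qualifying rows appear with NULL for the aggregate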