Using column value as column name in subquery - sql

I'm working with a legacy DB that has a table that houses field names from other tables.
So I have this structure:
Field_ID | Field_Name
*********************
1 | Col1
2 | Col2
3 | Col3
4 | Col4
and I need to pull a list of this field metadata along with the values of that field for a given user. So I need:
Field_ID | Field_Name | Value
1 | Col1 | ValueOfCol1onADiffTable
2 | Col2 | ValueOfCol2onADiffTable
3 | Col3 | ValueOfCol3onADiffTable
4 | Col4 | ValueOfCol4onADiffTable
I'd like to use the Field_Name in a subquery to pull that value, but can't figure out how to get SQL to evaluate Field_Name as a column in the sub-query.
So something like this:
select
    Field_ID
   ,Field_Name
   ,(select f.Field_Name from tblUsers u
     where u.User_ID = @userId) as value
from
    dbo.tblFields f
But that just returns the literal Field_Name text in the value column, not the value of the column it names.
Do I need to put the sub-query in a separate function and evaluate that? Or some kind of dynamic SQL?

In SQL Server this would require dynamic SQL and UNPIVOT. Here is a working demo:
create table tblFields (Field_ID int, Field_Name varchar(10));
insert into tblFields values
 (1,'Col1')
,(2,'Col2')
,(3,'Col3')
,(4,'Col4');

declare @userId int
set @userId = 1

create table tblUsers (User_ID int, col1 varchar(10), col2 varchar(10));
insert into tblUsers values
 (1,10,100),
 (2,20,200);

declare @collist varchar(max)
declare @sqlquery varchar(max)

select @collist = COALESCE(@collist + ', ', '') + Field_Name
from dbo.tblFields
where exists (
    select * from sys.columns c join sys.tables t
        on c.object_id = t.object_id and t.name = 'tblUsers'
        and c.name = Field_Name)

select @sqlquery =
    ' select Field_ID, Field_Name, value ' +
    ' from dbo.tblFields f join ' +
    ' ( select * from ' +
    ' ( select * ' +
    '   from tblUsers u ' +
    '   where u.User_ID = ' + cast(@userId as varchar(max)) +
    ' ) src ' +
    ' unpivot ( Value for field in (' + @collist + ')) up ) t' +
    ' on t.field = Field_Name'

exec(@sqlquery)
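For reference, with @userId = 1 the string built in @sqlquery looks roughly like this (only Col1 and Col2 survive the sys.columns check, because only those exist as columns on tblUsers):

select Field_ID, Field_Name, value
from dbo.tblFields f
join (
    select *
    from (
        select *
        from tblUsers u
        where u.User_ID = 1
    ) src
    unpivot (Value for field in (Col1, Col2)) up
) t on t.field = Field_Name

Note that because this is an inner join, fields with no matching column in tblUsers (Col3 and Col4 here) drop out of the result.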

Related

Select all columns with specific value

I have a table that has around 200 different boolean columns. These are basically On/Off switches used to blacklist data in another application. This also has multiple rows for different functionalities within said application.
As you can imagine, keeping a good overview of which columns are "turned on" for a specific function is rather tiresome when you have to check them manually against some Excel sheet, so I want to make my life easier by only displaying the columns that are turned on / set to true.
Something like:
select [columns with value '1']
from table
where function = 'function1'
Where this table:
+-----------+------+------+------+------+------+
| function  | Col1 | Col2 | Col3 | Col4 | Col5 |
+-----------+------+------+------+------+------+
| function1 |    1 |    0 |    1 |    1 |    0 |
| function2 |    0 |    1 |    0 |    0 |    0 |
+-----------+------+------+------+------+------+
returns this:
+-----------+------+------+------+
| function  | Col1 | Col3 | Col4 |
+-----------+------+------+------+
| function1 |    1 |    1 |    1 |
+-----------+------+------+------+
Is there any way to do something like this?
As is mentioned in the comments, result columns are defined independently of the table data, but the following approach, which returns the column names as a single column, is a possible solution:
Table:
CREATE TABLE Data (
[Function] varchar(3),
Col1 bit,
Col2 bit,
Col3 bit,
Col4 bit,
Col5 bit
)
INSERT INTO Data ([Function], Col1, Col2, Col3, Col4, Col5)
VALUES ('xyz', 1, 0, 1, 1, 1), ('abc', 0, 0, 0, 0, 1)
Dynamic statement:
DECLARE @stm nvarchar(max) = N''

SELECT @stm = CONCAT(@stm, ',(', '''', col.[name], ''', ', col.[name], ')')
FROM sys.columns col
JOIN sys.tables tab ON col.object_id = tab.object_id
JOIN sys.schemas sch ON tab.schema_id = sch.schema_id
JOIN sys.types typ ON col.system_type_id = typ.system_type_id
WHERE
    tab.[name] = 'Data' AND
    sch.[name] = 'dbo' AND
    col.[name] != 'Function'
ORDER BY col.[name]

SELECT @stm = CONCAT(
    'SELECT d.[Function], STRING_AGG(v.ColName, '','') AS [Columns] FROM Data d CROSS APPLY (VALUES ',
    STUFF(@stm, 1, 1, ''),
    ') v(ColName, ColVal) WHERE v.ColVal = 1 GROUP BY d.[Function]'
)

PRINT @stm
EXEC sp_executesql @stm
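For reference, the PRINT output should be roughly the following statement (note that STRING_AGG requires SQL Server 2017 or later):

SELECT d.[Function], STRING_AGG(v.ColName, ',') AS [Columns]
FROM Data d
CROSS APPLY (VALUES ('Col1', Col1), ('Col2', Col2), ('Col3', Col3), ('Col4', Col4), ('Col5', Col5)) v(ColName, ColVal)
WHERE v.ColVal = 1
GROUP BY d.[Function]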
Result:
Function Columns
abc Col5
xyz Col1,Col3,Col4,Col5
Here is another example. I hope it helps; it's a little tricky, but it gets what you want, or at least it could help your progress. If you have any doubts, send me a message or comment. Good luck.
IF OBJECT_ID('dbo.TestMr') IS NOT NULL DROP TABLE TestMr
IF OBJECT_ID('tempdb.dbo.#TestMrColumns') IS NOT NULL DROP TABLE #TestMrColumns

CREATE TABLE TestMr (x_function varchar(50), col1 numeric, col2 numeric, col3 numeric)
CREATE TABLE #TestMrColumns (y_function varchar(50), valor varchar(50), columna varchar(50))

-- Test values
INSERT INTO TestMr VALUES ('fun1',1,0,1)
INSERT INTO TestMr VALUES ('fun2',0,1,0)

DECLARE @script nvarchar(max)
DECLARE @cols nvarchar(max)

-- Get the columns of our table; they all have to share the same data type or this won't work.
SET @cols = STUFF((SELECT ',' + QUOTENAME(COLUMN_NAME)
                   FROM INFORMATION_SCHEMA.COLUMNS
                   WHERE TABLE_NAME = 'TestMr' AND DATA_TYPE = 'numeric'
                   FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 1, '')

-- Turn those columns into rows and save them in the temp table.
SET @script = 'select * from TestMr unpivot(valor for columna in (' + @cols + ')) unpiv'
INSERT INTO #TestMrColumns EXEC (@script);

-- Get the final columns for the select. Here we can apply conditions on the columns we want;
-- in this case we keep the columns that had valor = 1 for y_function = 'fun1'.
SET @cols = STUFF((SELECT ',' + QUOTENAME(columna)
                   FROM #TestMrColumns
                   WHERE valor = 1 AND y_function = 'fun1'
                   FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 1, '')

-- Final select
SET @script = 'select x_function,' + @cols + ' from TestMr where x_function = ''fun1'' '
EXEC (@script)

Update columns in multiple tables by names pulled from a temporary table

I have a temp table where various table names and connected column names are stored. If I were to run a simple SELECT on it the results would look something like this:
TableName | ColumnName
----------------------
Users     | RoleId
Tables    | OwnerId
Chairs    | MakerId
etc...
I'm looking for a way to set mentioned column values in the connected tables to NULL.
I know how to do it via a CURSOR or a WHILE loop by processing each row individually, but I'm trying to eliminate these performance hogs from my stored procedures.
Is there any way to build a join from the table names in the TableName column to the actual tables, and then set the connected ColumnName column values to NULL?
Check this script:
IF OBJECT_ID('SampleTable') IS NOT NULL
DROP TABLE SampleTable
CREATE TABLE SampleTable
(
Table_Name VARCHAR(50) NOT NULL,
Column_Name VARCHAR(50) NOT NULL
)
GO
INSERT INTO SampleTable
VALUES
('Users','RoleId'),('Tables','OwnerId'),('Chairs','MakerId') --Give your Combo here
GO
--Check this script (one UPDATE statement per column)
SELECT 'UPDATE ' + QUOTENAME(TABLE_SCHEMA) + '.' + QUOTENAME(S1.TABLE_NAME) +
' SET ' + QUOTENAME(S1.COLUMN_NAME) + ' = NULL ; '
AS [Dynamic_Scripts]
FROM SampleTable S JOIN INFORMATION_SCHEMA.COLUMNS S1 ON s.Table_Name=s1.Table_Name and s.Column_Name=s1.Column_Name
--Check this script (multiple columns in a single statement: 1 table with n columns -> 1 UPDATE query)
SELECT 'UPDATE ' + CONCAT('[',TABLE_SCHEMA,'].[',S1.TABLE_NAME,'] SET ') + STRING_AGG(CONCAT('[',S1.COLUMN_NAME,']=NULL'),',') + ' ; ' AS [Dynamic_Scripts]
FROM SampleTable S JOIN INFORMATION_SCHEMA.COLUMNS S1 ON s.Table_Name=s1.Table_Name and s.Column_Name=s1.Column_Name
GROUP BY CONCAT('[',TABLE_SCHEMA,'].[',S1.TABLE_NAME,'] SET ')
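Both queries only generate the statements, so you still have to concatenate and execute them (which is essentially what the next answer does), and the STRING_AGG variant additionally needs SQL Server 2017 or later. With the three sample rows, and assuming the Users, Tables and Chairs tables actually exist in the dbo schema with those columns, the first query produces something like:

UPDATE [dbo].[Users] SET [RoleId] = NULL ;
UPDATE [dbo].[Tables] SET [OwnerId] = NULL ;
UPDATE [dbo].[Chairs] SET [MakerId] = NULL ;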
Try this,
IF OBJECT_ID('SampleTable') IS NOT NULL
DROP TABLE SampleTable
CREATE TABLE SampleTable
(
Table_Name VARCHAR(50) NOT NULL,
Column_Name VARCHAR(50) NOT NULL
)
GO
INSERT INTO SampleTable
VALUES
('Users','RoleId'),('Tables','OwnerId'),('Chairs','MakerId')
,('Users','Appid'),('Tables','Column') --Give your Combo here
GO
declare @Sql nvarchar(1000) = ''

;with CTE as
(
    select QUOTENAME(a.Table_Name) as Table_Name
          ,stuff((select ',' + QUOTENAME(Column_Name) + '=null'
                  from SampleTable B
                  where a.Table_Name = b.Table_Name
                  for xml path('')), 1, 1, '') as UpdateCol
    from SampleTable A
    group by a.Table_Name
)
select @Sql = coalesce(@Sql + char(13) + char(10) + SingleUpdate, SingleUpdate)
from
(
    select CONCAT('Update ', Table_Name, ' SET ', UpdateCol) as SingleUpdate
    from CTE
) t4

print @Sql
select @Sql
Execute sp_executesql @Sql

sql server sort dynamic pivot on large set of data

I am having trouble sorting a pivot based on quite a large set of data. I have looked at many examples, but none of them seems to address the issue of volume, or perhaps I am just missing something. I have had a very good look here: Sort Columns For Dynamic Pivot and PIVOT in sql 2005, and found much good advice, but I still cannot find the correct way to sort my pivot.
I am using the following sql. It pivots the columns, but the result needs to be sorted for readability:
SELECT a.* INTO #tempA
FROM (SELECT TOP (5000) id, email,
             CONVERT(varchar, ROW_NUMBER() OVER (PARTITION BY email ORDER BY id)) AS PIVOT_CODE
      FROM Email) a
ORDER BY PIVOT_CODE

DECLARE @cols AS NVARCHAR(MAX),
        @sql  AS NVARCHAR(MAX)

SELECT @cols = STUFF((SELECT DISTINCT ', ' + QUOTENAME(col)
                      FROM #tempA WITH (NOLOCK)
                      CROSS APPLY
                      (
                          SELECT 'id_' + PIVOT_CODE, id
                      ) c (col, so)
                      GROUP BY col, so
                      --ORDER BY col
                      FOR XML PATH(''), TYPE
                     ).value('.', 'NVARCHAR(MAX)')
                     , 1, 1, '')

SET @sql = 'SELECT email, '
    + @cols +
    ' INTO ##AnotherPivotTest FROM
    (
        SELECT email,
               col,
               value
        FROM #tempA WITH (NOLOCK)
        CROSS APPLY
        (
            VALUES
            (''id_'' + PIVOT_CODE, id)
        ) c (col, value)
    ) d
    PIVOT
    (
        MAX(value)
        FOR col IN ('
    + @cols +
    ')
    ) piv'

EXEC (@sql)

SELECT * FROM ##AnotherPivotTest
The result is a chaos to look at:
==============================================================================================
| email    | id_19 | id_24 | id_2 | id_16 | id_5 | id_9 | id_23 | .... | id_1 | .... | id_10 |
==============================================================================================
| xx@yy.dk | NULL  | NULL  | NULL | NULL  | NULL | NULL | NULL  | NULL | 1234 | NULL | NULL  |
==============================================================================================
I would very much like the Ids to be sorted - beginning with id_1.
As you can see, I have attempted to place an 'order by' in the selection for 'cols', but that gives me the error: "ORDER BY items must appear in the select list if SELECT DISTINCT is specified." And without DISTINCT, I get another error: "The number of elements in the select list exceeds the maximum allowed number of 4096 elements."
I'm stuck, so any help will be greatly appreciated!
Not sure what causes the problem, but I solved my ordering problem by inserting the data coming from #tempA into another temp table and ordering it there:
INSERT INTO #tempB
SELECT * FROM #tempA
ORDER BY PIVOT_CODE
Then selecting distinct ones like so:
SELECT @cols = @cols + QUOTENAME(PIVOT_CODE) + ','   -- assumes @cols was initialised to '' rather than NULL
FROM (SELECT DISTINCT PIVOT_CODE FROM #tempB) t
ORDER BY PIVOT_CODE
SELECT @cols = SUBSTRING(@cols, 0, LEN(@cols)) --trims "," at end
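If the columns should come out in numeric order (id_1, id_2, ..., id_10) rather than string order, another option is to push the DISTINCT into a derived table and sort on the cast value. A sketch, assuming PIVOT_CODE always holds integer text:

DECLARE @cols NVARCHAR(MAX)

SELECT @cols = STUFF((SELECT ', ' + QUOTENAME('id_' + PIVOT_CODE)
                      FROM (SELECT DISTINCT PIVOT_CODE FROM #tempB) p
                      ORDER BY CAST(PIVOT_CODE AS int)   -- numeric, not alphabetical
                      FOR XML PATH(''), TYPE
                     ).value('.', 'NVARCHAR(MAX)'), 1, 2, '')

Because the DISTINCT now lives in the derived table, the outer ORDER BY no longer conflicts with it, and ORDER BY is allowed here since the subquery uses FOR XML.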
You can also just use a cursor to determine your cols and then order them.
Cursor with cols ordered
declare @gruppe nvarchar(max)
declare @gruppeSql nvarchar(max)
declare @SQL nvarchar(max)

DECLARE myCustomers CURSOR FOR
    select top 10 FirstName from [dbo].[DimCustomer] order by FirstName

set @gruppeSql = ''
OPEN myCustomers
FETCH NEXT FROM myCustomers INTO @gruppe
IF (@@FETCH_STATUS >= 0)
BEGIN
    SET @gruppeSql = @gruppeSql + '[' + @gruppe + ']'
    FETCH NEXT FROM myCustomers INTO @gruppe
END
WHILE (@@FETCH_STATUS <> -1)
BEGIN
    IF (@@FETCH_STATUS <> -2)
        SET @gruppeSql = @gruppeSql + ',[' + @gruppe + ']'
    FETCH NEXT FROM myCustomers INTO @gruppe
END
CLOSE myCustomers
DEALLOCATE myCustomers

SET @gruppeSql = replace(@gruppeSql, '''', '')

/* Select to preview your cols */
select @gruppeSql
Dynamic pivot
SET @SQL = '
    Select *
    from
    (
        SELECT SalesAmount, FirstName
        FROM [AdventureWorksDW2014].[dbo].[FactInternetSales] a
        inner join dbo.DimCustomer b on a.CustomerKey = b.CustomerKey
    ) x
    pivot
    (
        sum(SalesAmount)
        for FirstName in (' + @gruppeSql + ')
    ) p'

print @SQL
exec(@SQL)

Append "_Repeat" to Ambiguous column names

I have a query that joins a table back onto itself in order to display orders that generated a repeat within a certain window.
The table returns something like the following:
id | value | note | id | value | note
------------------------------------------------------
01 | abcde | .... | 03 | zyxxx | ....
06 | 12345 | .... | 09 | 54321 | ....
In actuality, the table returns over 150 columns, so when the join occurs, I end up with 300 columns. I end up having to manually rename 150 columns to "id_Repeat","value_Repeat","note_Repeat" etc...
I'm looking for some way of automatically appending "_Repeat" to the ambiguous columns. Is this possible in T-SQL, (Using SQL Server 2008) or will I have to manually map out each column using:
SELECT [value] AS [value_Repeat]
The only way I can see this working is to construct some dynamic SQL (ugh!). I put together a quick example of how this might work:
CREATE TABLE test1 (id INT, note VARCHAR(50));
CREATE TABLE test2 (id INT, note VARCHAR(20));
INSERT INTO test1 SELECT 1, 'hello';
INSERT INTO test2 SELECT 1, 'world';
DECLARE @SQL VARCHAR(4096);
SELECT @SQL = 'SELECT ';
SELECT @SQL = @SQL + t.name + '.' + c.name
       + CASE WHEN t.name LIKE '%test2%' THEN ' AS ' + c.name + '_repeat' ELSE '' END + ','
FROM sys.columns c
INNER JOIN sys.tables t ON t.object_id = c.object_id
WHERE t.name IN ('test1', 'test2');
SELECT @SQL = LEFT(@SQL, LEN(@SQL) - 1);
SELECT @SQL = @SQL + ' FROM test1 INNER JOIN test2 ON test1.id = test2.id;';
EXEC(@SQL);
SELECT @SQL;
DROP TABLE test1;
DROP TABLE test2;
Output is:
id note id_repeat note_repeat
1 hello 1 world
This isn't possible in T-SQL. A column will have the name it had in its source table, or any alias name you specify, but there is no way to systematically rename them.
For cases like this, it pays off to take it one level higher: write some code (using sys.columns) that generates the query you're after, including renames. Why do something manually for 150 columns when you have a computer at your disposal?

SQL Server 2008 select column from comma separated value

Table 1:
Id | Name
1 | Example1
2 | Example2
Table 2:
Id | Table1_IDs
1 | 1,2
2 | 2
I want to select rows from table1 using table2's Table1_IDs, like:
select *
from table1
where id in (select t.table1_IDs from table2 t)
You can build the query string and then use the sp_executesql stored procedure to run it. Something like this (not tested):
declare @ids varchar(2000)

select @ids = coalesce(@ids + ',', '') + convert(varchar(100), table1_IDs)
from table2

declare @query nvarchar(2000) = 'select * from table1 where id in (' + @ids + ')'
execute sp_executesql @query
But it is not a good idea to use comma-separated values because it hurts performance. Try to refactor your tables.
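A sketch of what that refactoring could look like (the Table1Map name is just illustrative): store one row per referenced id in a junction table, after which the lookup becomes a plain join with no string handling:

create table Table1Map
(
    Table2_Id int not null,
    Table1_Id int not null,
    primary key (Table2_Id, Table1_Id)
)

select distinct t1.*
from table1 t1
join Table1Map m on m.Table1_Id = t1.Id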
Any other options with the same structure?
select *
from table1 ta
join
(
    SELECT id,
           LTRIM(RTRIM(m.n.value('.[1]', 'varchar(8000)'))) AS Certs
    FROM (
        SELECT id,
               CAST('<XMLRoot><RowData>' + REPLACE(Table1_IDs, ',', '</RowData><RowData>') + '</RowData></XMLRoot>' AS XML) AS x
        FROM table2
    ) t
    CROSS APPLY x.nodes('/XMLRoot/RowData') m(n)
) b on b.Certs = ta.id
This breaks table2's comma-delimited values into rows so that they can be joined with table1.
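The question targets SQL Server 2008, but for anyone on SQL Server 2016 or later, the built-in STRING_SPLIT function does the same splitting without the XML conversion, for example:

select distinct ta.*
from table2 t2
cross apply string_split(t2.Table1_IDs, ',') s
join table1 ta on ta.Id = cast(s.value as int)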