Retrieve column value using column name in SQL Server

I am using SSMS 2014 and I want to retrieve a column's value using the column name in SQL Server. The user can select any column from the table and retrieve that column's value.
Example:
exec employee Name 'karl'
Output as follows:
| Id | Name | ManagerId | ManagerName | Gender | Dept |
|----|------|-----------|-------------|--------|------|
| 5  | Karl | 1         | Luke        | M      | 1    |
I created a procedure to resolve this, but I am not getting any value in the output:
Create proc sp_getEmpDetail
    @colname varchar(50),
    @colvalue varchar(50)
as
Select *
from employees1
where @colname = @colvalue
When I debug it, the variables do receive the values I supplied, but the query still returns no rows.
Please help me resolve this.
Thanks in advance.

--For this you need to create a dynamic query
Create proc sp_getEmpDetail
    @colname varchar(50),
    @colvalue varchar(50)
AS
BEGIN
    DECLARE @Sql_String NVARCHAR(MAX)
    SET @Sql_String = 'Select * from employees1 where ' + @colname + ' = ''' + @colvalue + ''''
    PRINT @Sql_String
    EXEC (@Sql_String)
END
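The dynamic string above splices the user's value straight into the statement, which is open to SQL injection. As a hedged sketch (the procedure name getEmpDetailSafe is mine, not from the question), the same lookup can quote the identifier with QUOTENAME and bind the value through sp_executesql:

```sql
-- Sketch: dynamic column name, parameterized value.
-- QUOTENAME brackets the identifier; @val is bound, never concatenated.
CREATE PROC getEmpDetailSafe
    @colname sysname,
    @colvalue varchar(50)
AS
BEGIN
    DECLARE @sql nvarchar(max);
    SET @sql = N'SELECT * FROM employees1 WHERE '
             + QUOTENAME(@colname) + N' = @val;';
    EXEC sp_executesql @sql, N'@val varchar(50)', @val = @colvalue;
END
```

Called as `exec getEmpDetailSafe 'Name', 'karl'`. As a side note, the `sp_` prefix is reserved for system procedures in SQL Server, so a different prefix avoids an extra lookup against the master database.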

Related

SQL pivot table1 into table2 without using dynamic SQL pivot or hardcode query

I have seen many questions and answers about pivoting tables in SQL, either with a dynamic SQL pivot or with a hard-coded CASE WHEN query.
However, is there any way I can pivot a table without using those two?
Table 1:
| col1 | col2 | col3 |
|--------|-------|--------|
| ABCD | 1 | XY123 |
| ABCD | 2 | RT789 |
| PQST | 3 | XY123 |
| PQST | 4 | RT789 |
Pivoting to
| col1 | ABCD | PQST |
|--------|-------|-------|
| XY123 | 1 | 3 |
| RT789 | 2 | 4 |
My idea was to retrieve the structure of the columns with:
WITH structure AS (
    SELECT DISTINCT
        col3 AS col1, col1 AS colName, col2 AS [values]
    FROM table1 ori
)
and then extract the matched values of each cell with joins, storing them temporarily, and finally JOIN again to populate the output. However, I am stuck after the above step. I can't use PIVOT, and it has to be dynamic (i.e., I can't hardcode each value with CASE WHEN).
How can I achieve this?
This is not as efficient (nor as easy to code) as a dynamic pivot. However, it is doable.
It does all need to be dynamic, i.e., creating each SQL statement as a string and executing that.
The process involves:
1. Determining the column names (stored in a temporary table)
2. Creating the output table with the first column only
3. Populating that first column
4. For each additional column name:
   - adding the column to the table (dynamically)
   - populating that column with data
You haven't specified the database, so I'll illustrate the following using SQL Server/T-SQL.
The following are in this db<>fiddle so you can see what's going on.
CREATE TABLE #ColNames (ColNum int IDENTITY(1,1), ColName nvarchar(100), ColNametxt nvarchar(100));
INSERT INTO #ColNames (ColName, ColNametxt)
SELECT DISTINCT QUOTENAME(Col1), Col1
FROM table1;
This will populate the #ColNames table with the rows (1, [ABCD], ABCD) and (2, [PQST], PQST).
The next step is to create your output table - I'll call it #pvttable
CREATE TABLE #pvttable (col1 nvarchar(100) PRIMARY KEY);
INSERT INTO #pvttable (col1)
SELECT DISTINCT Col3
FROM table1;
This creates your table with one column (col1) containing the values XY123 and RT789.
Then write your favorite loop (e.g., a cursor or WHILE loop). In each step:
Get the next column name
Add the column to the table
Update that column with appropriate data
e.g., the following is an illustrative example with your data.
DECLARE @CustomSQL nvarchar(4000);
DECLARE @n int = 1;
DECLARE @ColName nvarchar(100);
DECLARE @ColNametxt nvarchar(100);

SELECT @ColName = ColName,
       @ColNametxt = ColNametxt
FROM #ColNames
WHERE ColNum = @n;

WHILE @ColName IS NOT NULL
BEGIN
    SET @CustomSQL = N'ALTER TABLE #pvttable ADD ' + @ColName + N' nvarchar(100);';
    EXEC (@CustomSQL);
    SET @CustomSQL =
          N'UPDATE #pvttable SET ' + @ColName + N' = table1.col2'
        + N' FROM #pvttable INNER JOIN table1 ON #pvttable.col1 = table1.col3'
        + N' WHERE table1.col1 = N''' + @ColNametxt + N''';';
    EXEC (@CustomSQL);
    SET @n += 1;
    SET @ColName = NULL;
    SET @ColNametxt = NULL;
    SELECT @ColName = ColName,
           @ColNametxt = ColNametxt
    FROM #ColNames
    WHERE ColNum = @n;
END;

SELECT * FROM #pvttable;
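For the sample data, the first loop iteration builds statements along these lines (reconstructed by hand from the string concatenation above, so treat it as illustrative rather than captured output):

```sql
-- First iteration: column [ABCD] is added, then filled from table1.
ALTER TABLE #pvttable ADD [ABCD] nvarchar(100);

UPDATE #pvttable SET [ABCD] = table1.col2
FROM #pvttable INNER JOIN table1 ON #pvttable.col1 = table1.col3
WHERE table1.col1 = N'ABCD';
```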

SQL Server - validate table column values against provided values

I have the following task that I want to solve using a SQL Server query and/or stored procedure, and I would really appreciate it if someone could give me some direction.
Basically we have a data warehouse based on SQL Server. I want to validate columns in certain tables to ensure the values in these columns are valid.
Example as below:
Table1 ColumnsToValidate specifies the tables/columns whose values need to be validated. In the example I want to validate the Gender column of the Customer table and the State column of the Address table; ValidationID is a foreign key to a table holding all the valid values (Table2).
Table2 ValidationValues: this table holds all valid values for specific validation rules. In the example, validation rule #1 (ValidationID = 1) has two valid values, and validation rule #2 has three.
I'd like to (using SQL) dynamically create a query based on the values in Table1, which accordingly selects the Customer.Gender column and the Address.State column, so the values in these columns can be validated against the values in Table2.
Table1: ColumnsToValidate
TableName | ColumnName | ValidationID
-----------+----------------+-----------------
Customer | Gender | 1
Address | State | 2
Table2: ValidationValues
ValidationID | Values
-------------+----------------
1 | Male
1 | Female
2 | NY
2 | WA
2 | CA
Table3: Customer
CustomerID | Gender
-----------+----------------
111 | Male
112 | Female
113 | Unknown
114 | NULL
Table4: Address
AddressID | State
-----------+----------------
211 | AL
212 | NY
213 | WA
214 | NULL
EDIT: I could write this in a C# program, but the program would be slow. I would think there should be a way in pure SQL (SQL Server).
Here is my solution, in two steps:
1) Validate the values "as if" the target table and column are known; e.g., the following outer-join query finds invalid values in the Customer.Gender column (validation rule 1):
select * from Customer a
left join ValidationValues b
    on a.Gender = b.[Values]
    and b.ValidationID = 1
where b.[Values] is null
2) Use dynamic SQL to generate the above script, using values from the ColumnsToValidate table and substituting the table name 'Customer' and column name 'Gender' with the variables @tab and @col1:
declare @tableCursor cursor;
declare @tab varchar(100),
        @col1 varchar(100),
        @val_id varchar(20);
set @tableCursor = cursor for select TableName, ColumnName, ValidationID from ColumnsToValidate;
open @tableCursor;
fetch next from @tableCursor into @tab, @col1, @val_id;
while (@@fetch_status = 0)
begin
    --dynamic sql
    declare @sql nvarchar(max);
    set @sql =
        N'select * from ' + @tab + N' a ' +
        N'left join ValidationValues b ' +
        N'on a.' + @col1 + N' = b.[Values] ' +
        N'and b.ValidationID = ' + @val_id + N' ' +
        N'where b.[Values] is null';
    --print @sql
    exec (@sql);
    fetch next from @tableCursor into @tab, @col1, @val_id;
end
close @tableCursor;
deallocate @tableCursor;
As noted, this is mock-up code and not tested. However, I just want to share the idea with people facing similar problems - the key to the solution is dynamic SQL.
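One detail worth adding to the idea: since the table and column names come out of ColumnsToValidate as data, wrapping them in QUOTENAME before concatenation keeps odd names (spaces, reserved words) and injection attempts from breaking the generated script. A sketch of the statement-building step only, under the same variable names as the cursor loop above:

```sql
-- Sketch: build the per-row validation query with quoted identifiers.
-- @tab, @col1, @val_id are assumed to hold one row of ColumnsToValidate.
set @sql =
    N'select * from ' + QUOTENAME(@tab) + N' a ' +
    N'left join ValidationValues b ' +
    N'on a.' + QUOTENAME(@col1) + N' = b.[Values] ' +
    N'and b.ValidationID = ' + @val_id + N' ' +
    N'where b.[Values] is null';
```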

Fetch specific column and row data based on row values from another table in SQL server

Does anyone know if it is possible to fetch data from a column in a table based on row values from another table?
e.g.
Table 1:
Name| Date
----|-----
Bob | D1
Jon | D2
Stu | D3
Amy | D4
Table 2:
Date |Bob |Jon |Stu |Amy
-----|----|----|----|----
D1 | A | B | C | D
D2 | B | C | D | A
D3 | C | D | A | B
D4 | D | A | B | C
I need to match the date but bring through the correct letter for each name
So Table 3 would be:
Name| Date | Letter
----|------|-------
Bob | D1 | A
Jon | D2 | C
Stu | D3 | A
Amy | D4 | C
Any suggestions are welcome.
thanks
If you are looking for a way without hardcoding column names, you can try this.
Let's say your tables are named #table1 and #table2. Then:
select
    [Name]   = [t1].[Name]
    ,[Date]   = [t1].[Date]
    ,[Letter] = [col2].value('.', 'varchar(10)')
from #table1 as [t1]
cross apply
(
    select [t2_xml] = cast((select * from #table2 for xml path('t2')) as xml)
) as [t2]
cross apply [t2].[t2_xml].nodes('t2[Date/text()=sql:column("[t1].[Date]")]') as [tab]([col])
cross apply [col].nodes('*[local-name(.)=sql:column("[t1].[Name]")]') as [tab2]([col2]);
There are many ways to achieve the desired output. My solution uses a combination of a cursor and dynamic T-SQL.
Here is the code, commented step by step:
--1. create test tables
create table table1 ([Name] nvarchar(50),[Date] nvarchar(50))
create table table2 ([Date] nvarchar(50),Bob nvarchar(50),Jon nvarchar(50),Stu nvarchar(50),Amy nvarchar(50))
create table table3 ([Name] nvarchar(50),[Date] nvarchar(50),[Letter] nvarchar(50))
--2. populate test tables
insert into table1
select 'Bob','D1'
union all select 'Jon','D2'
union all select 'Stu','D3'
union all select 'Amy','D4'
insert into table2
select 'D1','A','B','C','D'
union all select 'D2','B','C','D','A'
union all select 'D3','C','D','A','B'
union all select 'D4','D','A','B','C'
--3. declare variables
DECLARE @query NVARCHAR(max); --this variable will hold the dynamic TSQL query
DECLARE @name NVARCHAR(50);
DECLARE @date NVARCHAR(50);
DECLARE @result NVARCHAR(50); --this variable will hold the "letter" value returned by the dynamic TSQL query
DECLARE @testCursor CURSOR;
--4. define the cursor that will scan all rows in table1
SET @testCursor = CURSOR FOR SELECT [Name], [Date] FROM table1;
OPEN @testCursor;
FETCH NEXT FROM @testCursor INTO @name, @date;
WHILE @@FETCH_STATUS = 0
BEGIN
    --5. for each row in table1, build a dynamic query that retrieves the correct "Letter" value from table2
    SET @query = 'select @res=' + @name + ' from table2 where [Date] =''' + @date + '''';
    --6. execute the dynamic TSQL query, saving the result in the @result variable
    EXECUTE sp_executesql @query, N'@res nvarchar(50) OUTPUT', @res = @result OUTPUT;
    --insert data into table3, which holds the final results
    INSERT INTO table3 SELECT @name, @date, @result;
    FETCH NEXT FROM @testCursor INTO @name, @date;
END
CLOSE @testCursor;
DEALLOCATE @testCursor;
select * from table1
select * from table2
select * from table3
Here are the results. The first two tables show the inputs; the third contains the actual results:
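To make step 5 concrete: for the first row of table1 (Bob, D1), the string handed to sp_executesql comes out roughly as follows (reconstructed by hand from the concatenation, so treat it as illustrative):

```sql
select @res=Bob from table2 where [Date] ='D1'
```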

Find a value from all table columns

I am building functionality that will filter data on every column of a table.
Let's say I have this table:
-------------------------------------
| ID | NAME | Address | Remarks |
| 1 | Manny | Phil | Boxer-US |
| 2 | Timothy | US | Boxer |
| 3 | Floyd | US | Boxer |
| 4 | Maidana | US | Boxer |
| 5 | Marquez | MEX | Boxer |
-------------------------------------
If I search for "US", it should give me IDs 1-4, since "US" appears in one of their columns.
I could have this to filter it:
SELECT ID FROM tbl_Boxers
WHERE ID LIKE '%US%' OR NAME LIKE '%US%' OR Address LIKE '%US%' OR Remarks LIKE '%US%'
But I'm trying to avoid a long WHERE clause here, since in reality I have around 15 columns to look at.
Is there any other way to minimize the where clause?
Please help.
Thanks
The way this is normally done is to make a 'searchable field' column into which you concatenate all the search columns, and query that single field.
So while that provides an easier overview and simpler querying, it adds data-maintenance work you need to be aware of on inserts and updates.
Also, on its own it's not an optimal way of searching, so if performance is important you should look at implementing full-text search.
So the question is where you want the 'overhead', and whether the functionality you're building will run often or just once in a while.
If it is the former and performance is important, look to full-text search. If it is just a once-in-a-while query, I'd probably just write the long WHERE clause myself to avoid adding more overhead to the maintenance of the data.
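The 'searchable field' idea can be sketched with a persisted computed column (illustrative; the column name SearchField is mine, and CONCAT assumes SQL Server 2012 or later). SQL Server then maintains the concatenation itself, which removes the insert/update bookkeeping mentioned above:

```sql
-- Sketch: a persisted computed column holding the concatenated search text.
-- The '|' separator prevents values from adjacent columns running together.
ALTER TABLE tbl_Boxers
ADD SearchField AS (
        CONCAT(NAME, '|', Address, '|', Remarks)
    ) PERSISTED;

-- The filter then collapses to a single predicate:
-- SELECT ID FROM tbl_Boxers WHERE SearchField LIKE '%US%';
```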
Check the following solution. Here the query is generated dynamically based on the column names in your table.
This is applicable if the given table is a physical table; the solution won't work for temporary tables or table variables.
BEGIN TRAN
--Simulate your table structure
--Should be a physical table. Cannot be a temp table or a table variable
CREATE TABLE TableA
(
ID INT,
NAME VARCHAR(50),
ADDRESS VARCHAR(50),
REMARKS VARCHAR(50)
)
--Added values for testing
INSERT INTO TableA(ID, name , address ,remarks) VALUES(1,'Manny','Phil','Boxer-US')
INSERT INTO TableA(ID, name , address ,remarks) VALUES(2,'Timothy','US','Boxer')
INSERT INTO TableA(ID, name , address ,remarks) VALUES(3,'Floyd','US','Boxer')
INSERT INTO TableA(ID, name , address ,remarks) VALUES(4,'Maidana','US','Boxer')
INSERT INTO TableA(ID, name , address ,remarks) VALUES(5,'Marquez',' MEX','Boxer')
--Solution starts from here
DECLARE @YourSearchValue VARCHAR(50) --Will be passed
SET @YourSearchValue = 'US' --Simulated passed value
CREATE TABLE #TableCols
(
    ID INT IDENTITY(1,1),
    COLUMN_NAME VARCHAR(1000)
)
INSERT INTO #TableCols (COLUMN_NAME)
SELECT COLUMN_NAME
FROM information_schema.columns
WHERE table_name = 'TableA';
DECLARE @STARTCOUNT INT, @MAXCOUNT INT, @COL_NAME VARCHAR(1000), @QUERY VARCHAR(8000), @SUBQUERY VARCHAR(8000)
SELECT @STARTCOUNT = 1, @MAXCOUNT = MAX(ID) FROM #TableCols;
SELECT @QUERY = '', @SUBQUERY = ''
WHILE (@STARTCOUNT <= @MAXCOUNT)
BEGIN
    SELECT @COL_NAME = COLUMN_NAME FROM #TableCols WHERE ID = @STARTCOUNT;
    SET @SUBQUERY = @SUBQUERY + ' CONVERT(VARCHAR(50), ' + @COL_NAME + ') LIKE ''%' + @YourSearchValue + '%''' + ' OR ';
    SET @STARTCOUNT = @STARTCOUNT + 1
END
SET @SUBQUERY = LEFT(@SUBQUERY, LEN(@SUBQUERY) - 3);
SET @QUERY = 'SELECT * FROM TableA WHERE 1 = 1 AND (' + @SUBQUERY + ')'
--PRINT (@QUERY);
EXEC (@QUERY);
ROLLBACK
Hope this helps.
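A closing hedge on the solution above: the search value is concatenated into @SUBQUERY, so for untrusted input it is safer to bind it as a parameter. A minimal sketch of the same shape of final query, with the value passed through sp_executesql (the column list here is abbreviated for illustration):

```sql
-- Sketch: the generated WHERE clause references @search as a bound
-- parameter instead of splicing the raw value into the string.
DECLARE @QUERY NVARCHAR(MAX) =
    N'SELECT * FROM TableA WHERE 1 = 1 AND ('
    + N'CONVERT(VARCHAR(50), ID) LIKE ''%'' + @search + ''%'' OR '
    + N'CONVERT(VARCHAR(50), NAME) LIKE ''%'' + @search + ''%'')';
EXEC sp_executesql @QUERY, N'@search VARCHAR(50)', @search = 'US';
```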

Master SQL Query - Find all columns in one table whose declared type is wider than the maximum value they contain

For example, I have a varchar(200) column whose longest value is 25 characters. This query should return that column. The query should run through all columns in the selected table and return all such results, so we can examine the column structure and take appropriate action.
What I mean is: "find me all columns that are defined as wider than the largest actual data value in them".
You didn't indicate which RDBMS, so I took as general an approach as I could. The INFORMATION_SCHEMA views are generally supported by the large RDBMS products (they're a standard, for all that is worth). If not, tweak this to suit the metadata available, as well as any syntax oddities. The approach itself is sound.
SET NOCOUNT ON;
-- Create a local table for processing
DECLARE @RESIZING TABLE
(
    resizing_id int identity(1,1) NOT NULL PRIMARY KEY
    , schemaname sysname
    , tablename sysname
    , columnname sysname
    , datatype sysname
    , max_length int
    , current_length int
);
-- Use the ANSI standard view for system meta data
-- to populate the table. Only focusing on varchar fields
-- as they are the only ones that will provide useful
-- sizing information.
-- To make it work with chars, update the individual
-- queries to perform a trim operation before calling len
-- and obviously update the data_type filter to include char
INSERT INTO @RESIZING
SELECT
    ISC.TABLE_SCHEMA
    , ISC.TABLE_NAME
    , ISC.COLUMN_NAME
    , ISC.DATA_TYPE
    , ISC.CHARACTER_MAXIMUM_LENGTH
    , NULL AS current_length
FROM
    INFORMATION_SCHEMA.COLUMNS ISC
WHERE
    ISC.DATA_TYPE = 'varchar';
-- Create a cursor
-- Kill a kitten
DECLARE Csr CURSOR
FOR
SELECT
    -- build out a query like
    -- SELECT @current_len = MAX(LEN(X.[COLUMN_NAME])) FROM [dbo].[TABLE] X WITH(NOLOCK)
    'SELECT @current_len = MAX(LEN(X.['
    + R.columnname
    + '])) FROM ['
    + R.schemaname
    + '].['
    + R.tablename
    + '] X WITH(NOLOCK) ' AS query
    , R.current_length
FROM
    @RESIZING R
FOR UPDATE OF current_length;
-- 2 local variables, one for the dynamic query
-- one to hold the results of said query
DECLARE
    @query nvarchar(max)
    , @current_length int;
OPEN Csr;
FETCH NEXT
FROM Csr
INTO @query, @current_length;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- try before you buy
    PRINT @query;
    -- Run the query, assigning the length to the @current_length variable
    EXECUTE sp_executesql @query, N'@current_len int OUTPUT', @current_len = @current_length OUTPUT;
    -- Push the result into our table variable
    UPDATE
        R
    SET
        current_length = @current_length
    FROM
        @RESIZING R
    WHERE
        CURRENT OF Csr;
    FETCH NEXT
    FROM Csr
    INTO @query, @current_length;
END
CLOSE Csr;
DEALLOCATE Csr;
-- Return the resultset for all the
-- columns declared longer than the longest
-- value they actually hold (NULLs included)
SELECT
    *
FROM
    @RESIZING R
WHERE
    R.max_length > isnull(R.current_length, 0);
Results (SQL Server 2008 R2):
| resizing_id | schemaname | tablename    | columnname | datatype | max_length | current_length |
|-------------|------------|--------------|------------|----------|------------|----------------|
| 1           | dbo        | DupesOk      | name       | varchar  | 50         | 12             |
| 2           | dbo        | SALES_HEADER | CCCode     | varchar  | 15         | 15             |
| 3           | lick       | ABC          | value      | varchar  | 50         | 8              |