Updating SQL database values - sql

In the table Requisitions I have two columns, RequisitionID and Code.
I need to update the Code value based on the RequisitionID value in this format:
RN-000RequisitionID/2017, so the output will be RN-0001/2017 if RequisitionID = 1, for example.
I have tried the query below but it didn't work.
update [dbo].[Requisitions] set [Code]='RN-000 "'RequisitionID'"/2017'

Modify your query like this:
update [dbo].[Requisitions] set [Code]='RN-000'+RequisitionID+'/2017'
If the above doesn't work, use:
update [dbo].[Requisitions] set [Code]='RN-000'+CONVERT(VARCHAR,RequisitionID)+'/2017'
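A small, hedged refinement to that second statement: CONVERT(VARCHAR, ...) with no length defaults to 30 characters, so it is safer to state the length explicitly, for example:
update [dbo].[Requisitions] set [Code]='RN-000'+CONVERT(VARCHAR(10),RequisitionID)+'/2017'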
Hope it helps.

You need to use a + to CONCATENATE them together. You will also have to cast RequisitionID to a VARCHAR so that the + operator is interpreted as concatenation rather than addition.
declare @table table (RequisitionID int, Code varchar(64))
insert into @table values
(1,'RN-000')

update @table
Set Code = Code + cast(RequisitionID as varchar(10)) + '/2017'
Where RequisitionID = 1

select * from @table
+---------------+--------------+
| RequisitionID | Code         |
+---------------+--------------+
|             1 | RN-0001/2017 |
+---------------+--------------+
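Applied to the question's actual table, a minimal sketch (assuming SQL Server 2012 or later, where CONCAT converts the int for you) would be:

update [dbo].[Requisitions]
set [Code] = CONCAT('RN-000', RequisitionID, '/2017');

If a fixed-width four-digit number is wanted instead (RN-0001, RN-0010, ...), CONCAT('RN-', RIGHT('0000' + cast(RequisitionID as varchar(10)), 4), '/2017') produces that form.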

Related

SQL pivot table1 into table2 without using dynamic SQL pivot or hardcode query

I have seen many questions and answers about pivoting a table in SQL, either with a dynamic SQL pivot or with a hard-coded query using CASE WHEN.
However, is there any way I can pivot a table without using those two?
Table 1:
| col1 | col2 | col3 |
|--------|-------|--------|
| ABCD | 1 | XY123 |
| ABCD | 2 | RT789 |
| PQST | 3 | XY123 |
| PQST | 4 | RT789 |
Pivoting to
| col1 | ABCD | PQST |
|--------|-------|-------|
| XY123 | 1 | 3 |
| RT789 | 2 | 4 |
My idea was to retrieve the structure of the columns with:
WITH
structure AS (
SELECT DISTINCT
col3 AS col1, col1 AS colName, col2 AS values
FROM table1 ori
)
and then extracting the matched values of each cell with joins and storing them temporarily, and at last JOINing again to populate them in the output. However, I am stuck after the above step. I can't use PIVOT and have to do this dynamically (i.e., I can't hardcode each value with CASE WHEN).
How can I achieve this?
This is not as efficient (and not as easy to code) as a dynamic pivot. However, it is doable.
It does all need to be dynamic, i.e., creating each SQL statement as a string and executing it.
The process involves:
Determining the column names (stored in a temporary table)
Creating the table with the first column only
Populating that first column
For each additional column name:
Adding that column to the table (dynamically)
Populating that column with data
You haven't specified the database, so I'll illustrate the approach below using SQL Server/T-SQL.
The following is all in this db<>fiddle so you can see what's going on.
CREATE TABLE #ColNames (ColNum int IDENTITY(1,1), ColName nvarchar(100), ColNametxt nvarchar(100));
INSERT INTO #ColNames (ColName, ColNametxt)
SELECT DISTINCT QUOTENAME(Col1), Col1
FROM table1;
This will populate the #ColNames table with the rows (1, [ABCD], ABCD) and (2, [PQST], PQST).
The next step is to create your output table - I'll call it #pvttable
CREATE TABLE #pvttable (col1 nvarchar(100) PRIMARY KEY);
INSERT INTO #pvttable (col1)
SELECT DISTINCT Col3
FROM table1;
This creates your table with one column (col1) containing the values XY123 and RT789.
Then write your favorite loop (e.g., a cursor or a WHILE loop). In each step:
Get the next column name
Add the column to the table
Update that column with appropriate data
e.g., the following is an illustrative example with your data.
DECLARE @CustomSQL nvarchar(4000);
DECLARE @n int = 1;
DECLARE @ColName nvarchar(100);
DECLARE @ColNametxt nvarchar(100);

SELECT @ColName = ColName,
       @ColNametxt = ColNametxt
FROM #ColNames
WHERE ColNum = @n;

WHILE @ColName IS NOT NULL
BEGIN
    SET @CustomSQL = N'ALTER TABLE #pvttable ADD ' + @ColName + N' nvarchar(100);';
    EXEC (@CustomSQL);
    SET @CustomSQL =
        N'UPDATE #pvttable SET ' + @ColName + N' = table1.col2'
        + N' FROM #pvttable INNER JOIN table1 ON #pvttable.col1 = table1.col3'
        + N' WHERE table1.col1 = N''' + @ColNametxt + N''';';
    EXEC (@CustomSQL);
    SET @n += 1;
    SET @ColName = NULL;
    SET @ColNametxt = NULL;
    SELECT @ColName = ColName,
           @ColNametxt = ColNametxt
    FROM #ColNames
    WHERE ColNum = @n;
END;
SELECT * FROM #pvttable;
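Once you have consumed the result, the temporary objects can be dropped (a minimal cleanup sketch, using the temp table names from above):

DROP TABLE #ColNames;
DROP TABLE #pvttable;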

Fetch specific column and row data based on row values from another table in SQL server

Does anyone know if it is possible to fetch data from a column in a table based on row values from another table?
e.g.
Table 1:
Name| Date
----|-----
Bob | D1
Jon | D2
Stu | D3
Amy | D4
Table 2:
Date |Bob |Jon |Stu |Amy
-----|----|----|----|----
D1 | A | B | C | D
D2 | B | C | D | A
D3 | C | D | A | B
D4 | D | A | B | C
I need to match the date but bring through the correct letter for each name.
So Table 3 would be:
Name| Date | Letter
----|------|-------
Bob | D1 | A
Jon | D2 | C
Stu | D3 | A
Amy | D4 | C
Any suggestions are welcome.
Thanks.
If you are looking for a way without hardcoding the column names, you can try this.
Let's say your tables are named #table1 and #table2. Then:
select
[Name] = [t1].[Name]
,[Date] = [t1].[Date]
,[Letter] = [col2].[value]('.', 'varchar(10)')
from
#table1 as [t1]
cross apply
(
select [t2_xml] = cast((select * from #table2 for xml path('t2')) as xml)
) as [t2]
cross apply
[t2].[t2_xml].[nodes]('t2[Date/text()=sql:column("[t1].[Date]")]') as [tab]([col])
cross apply
[col].[nodes]('*[local-name(.)=sql:column("[t1].[Name]")]') as [tab2]([col2]);
There are many ways to achieve the desired output. My solution uses a combination of cursors and dynamic TSQL.
Here is the code, commented step by step:
--1. create test tables
create table table1 ([Name] nvarchar(50),[Date] nvarchar(50))
create table table2 ([Date] nvarchar(50),Bob nvarchar(50),Jon nvarchar(50),Stu nvarchar(50),Amy nvarchar(50))
create table table3 ([Name] nvarchar(50),[Date] nvarchar(50),[Letter] nvarchar(50))
--2. populate test tables
insert into table1
select 'Bob','D1'
union all select 'Jon','D2'
union all select 'Stu','D3'
union all select 'Amy','D4'
insert into table2
select 'D1','A','B','C','D'
union all select 'D2','B','C','D','A'
union all select 'D3','C','D','A','B'
union all select 'D4','D','A','B','C'
--3. declare variables
DECLARE @query NVARCHAR(max); --this variable will hold the dynamic TSQL query
DECLARE @name NVARCHAR(50);
DECLARE @date NVARCHAR(50);
DECLARE @result NVARCHAR(50); --this variable will hold the "letter" value returned by the dynamic TSQL query
DECLARE @testCursor CURSOR;
--4. define the cursor that will scan all rows in table1
SET @testCursor = CURSOR FOR SELECT [Name], [Date] FROM table1;
OPEN @testCursor;
FETCH NEXT FROM @testCursor INTO @name, @date;
WHILE @@FETCH_STATUS = 0
BEGIN
    --5. for each row in table1 create a dynamic query that retrieves the correct "Letter" value from table2
    SET @query = 'select @res=' + @name + ' from table2 where [Date] =''' + @date + ''''
    --6. execute the dynamic TSQL query, saving the result in the @result variable
    EXECUTE sp_executesql @query, N'@res nvarchar(50) OUTPUT', @res = @result OUTPUT
    --insert data into table3, which holds the final results
    insert into table3 select @name, @date, @result
    FETCH NEXT FROM @testCursor INTO @name, @date;
END
CLOSE @testCursor;
DEALLOCATE @testCursor;
select * from table1
select * from table2
select * from table3
Here are the results. The first two tables show the inputs, the third table contains the actual results:
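For completeness, the same output can also be produced without a cursor by unpivoting table2 and joining on both the date and the name. A minimal sketch, assuming the four name columns shown above:

select t1.[Name], t1.[Date], u.Letter
from table1 t1
inner join
(
    select [Date], [Name], Letter
    from table2
    unpivot (Letter for [Name] in (Bob, Jon, Stu, Amy)) as up
) u on u.[Date] = t1.[Date] and u.[Name] = t1.[Name];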

Transpose rows into columns SQL Server 2014

I have a CSV file with 51 columns.
In this order, there is a UID Column, Serial Column, Date column and 48 columns for each 30 minute segment of the day (from 00:30 through to 00:00). Each day has a new row.
So it looks like:
UID | Serial | Date | Val_0030 | Val_0100 | Val_0130 | ..... | Val_0000
123 | 123456 | 2016-01-02 | 56.2 | 23.25 | 32.8 | ..... | 86.23
I need to transpose this data into 4 columns, so that each half hour has a UID, Serial and Date column. In other words I need to run down instead of across.
To look like this:
UID | Serial | 2016-01-02 00:30 | Value
Rather than each day having a new row as it currently does, a column such as Val_0130 will determine that the time is 01:30, which will be concatenated with the date.
I have tried using PIVOT and UNPIVOT without any success. Can anyone advise the best approach to do this?
I would use UNPIVOT and then cut up the column name (e.g., Val_0130) to add the time to the date and get the desired result. This way you only have to write the 48 columns in one spot.
Here is some test data:
DECLARE @Table AS TABLE (UID INT, Serial INT, Date DATETIME, Val_0030 MONEY, Val_0100 MONEY, Val_0130 MONEY, Val_0000 MONEY)

INSERT INTO @Table (UID, Serial, Date, Val_0030, Val_0100, Val_0130, Val_0000)
VALUES
 (123, 123456, '2016-01-02',56.2,23.25,12.34,86.23)
,(231, 234561, '2016-01-05',26.2,13.25,23.45,106.23)
,(312, 345612, '2016-01-07',76.2,3.25,34.56,1010.56)
And the Query
SELECT
    UID
    ,Serial
    ,DateWithTime = [Date] + CAST((SUBSTRING(ColumnNames,5,2) + ':' + RIGHT(ColumnNames,2)) AS DATETIME)
    ,Value
FROM
    @Table t
UNPIVOT (
    Value
    FOR ColumnNames IN (Val_0030, Val_0100, Val_0130, Val_0000)
) u
And if you don't want to type out all 48 columns (I wouldn't want to either), just run this query and copy and paste the result into the ColumnNames IN () section of the above query.
DECLARE @ColString VARCHAR(MAX) = ''
DECLARE @DT DATETIME = '00:00'

WHILE @DT < '1900-01-02 00:00:00.000'
BEGIN
    IF LEN(@ColString) > 0
    BEGIN
        SET @ColString += ','
    END
    SET @ColString += 'Val_' + FORMAT(@DT,'HHmm')
    SET @DT = DATEADD(MINUTE,30,@DT)
END

SELECT @ColString
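For reference, with the half-hour stepping above, the generated string runs Val_0000,Val_0030,Val_0100,... through Val_2330 (48 names); the order inside the IN (...) list does not matter, so it can be pasted in as-is.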
Matt provides a good answer with UNPIVOT. On platforms where that's not an option, you can get the same effect using a cross join and a case statement. Create a table of half-hours, and produce the values with
select ...
, case hh.time when '00:00' then VAL_0000
when '00:30' then Val_0030
when '01:00' then Val_0100
when '01:30' then Val_0130
...
end as Value
from data cross join "half-hours" as hh
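A slightly more concrete sketch of that idea in T-SQL, assuming the same sample columns as above and a derived table of half-hour labels in place of a permanent "half-hours" table:

SELECT
     t.UID
    ,t.Serial
    ,DateWithTime = t.[Date] + CAST(hh.hhmm AS DATETIME)
    ,Value = CASE hh.hhmm
                 WHEN '00:30' THEN t.Val_0030
                 WHEN '01:00' THEN t.Val_0100
                 WHEN '01:30' THEN t.Val_0130
                 WHEN '00:00' THEN t.Val_0000
             END
FROM @Table t
CROSS JOIN (VALUES ('00:30'), ('01:00'), ('01:30'), ('00:00')) AS hh(hhmm)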

Find a value from all table columns

I am building functionality that will filter data on all columns of a table.
Let's say I have this table:
-------------------------------------
| ID | NAME | Address | Remarks |
| 1 | Manny | Phil | Boxer-US |
| 2 | Timothy | US | Boxer |
| 3 | Floyd | US | Boxer |
| 4 | Maidana | US | Boxer |
| 5 | Marquez | MEX | Boxer |
-------------------------------------
If I search for "US", it should give me IDs 1-4, since "US" exists in their columns.
I could have this to filter it:
SELECT ID FROM tbl_Boxers
WHERE ID LIKE '%US%' OR NAME LIKE '%US%' OR Address LIKE '%US%' OR Remarks LIKE '%US%'
But I'm trying to avoid a long WHERE clause here since, in reality, I have around 15 columns to look at.
Is there any other way to minimize the where clause?
Please help.
Thanks
The way this is normally done is to make a 'Searchable Field' column where you concatenate all the search columns into one field for searching.
While that makes the overview and querying easier, it adds some management of the data you need to be aware of on inserts and updates.
On its own it's also not an optimal way of searching, so if performance is important, you should look at implementing full-text search.
So the question is where you want the 'overhead', and whether the functionality you're building is going to be run often or just once in a while.
If it is the former and performance is important, look at full-text search. If it is just a once-in-a-while query, then I'd probably just write the long WHERE clause myself to avoid adding more overhead to the maintenance of the data.
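As a minimal sketch of the 'Searchable Field' idea, assuming the tbl_Boxers columns from the question and SQL Server 2012+ (for CONCAT), a persisted computed column keeps itself in sync on inserts and updates:

ALTER TABLE tbl_Boxers
    ADD SearchField AS CONCAT(NAME, '|', Address, '|', Remarks) PERSISTED;

SELECT ID FROM tbl_Boxers WHERE SearchField LIKE '%US%';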
Check the following solution. Here the query is generated dynamically based on the column names in your table.
This is applicable if the given table is a physical table; this solution won't work for temporary tables or table variables.
BEGIN TRAN
--Simulate your table structure
--Should be a physical table. Cannot be a temp table or a table variable
CREATE TABLE TableA
(
ID INT,
NAME VARCHAR(50),
ADDRESS VARCHAR(50),
REMARKS VARCHAR(50)
)
--Added values for testing
INSERT INTO TableA(ID, name , address ,remarks) VALUES(1,'Manny','Phil','Boxer-US')
INSERT INTO TableA(ID, name , address ,remarks) VALUES(2,'Timothy','US','Boxer')
INSERT INTO TableA(ID, name , address ,remarks) VALUES(3,'Floyd','US','Boxer')
INSERT INTO TableA(ID, name , address ,remarks) VALUES(4,'Maidana','US','Boxer')
INSERT INTO TableA(ID, name , address ,remarks) VALUES(5,'Marquez',' MEX','Boxer')
--Solution Starts from here
DECLARE @YourSearchValue VARCHAR(50) --Will be passed in
SET @YourSearchValue = 'US' --Simulated passed value

CREATE TABLE #TableCols
(
    ID INT IDENTITY(1,1),
    COLUMN_NAME VARCHAR(1000)
)

INSERT INTO #TableCols
(COLUMN_NAME)
SELECT COLUMN_NAME
FROM information_schema.columns
WHERE table_name = 'TableA';

DECLARE @STARTCOUNT INT, @MAXCOUNT INT, @COL_NAME VARCHAR(1000), @QUERY VARCHAR(8000), @SUBQUERY VARCHAR(8000)

SELECT @STARTCOUNT = 1, @MAXCOUNT = MAX(ID) FROM #TableCols;
SELECT @QUERY = '', @SUBQUERY = ''

WHILE (@STARTCOUNT <= @MAXCOUNT)
BEGIN
    SELECT @COL_NAME = COLUMN_NAME FROM #TableCols WHERE ID = @STARTCOUNT;
    SET @SUBQUERY = @SUBQUERY + ' CONVERT(VARCHAR(50), ' + @COL_NAME + ') LIKE ''%' + @YourSearchValue + '%''' + ' OR ';
    SET @STARTCOUNT = @STARTCOUNT + 1
END

SET @SUBQUERY = LEFT(@SUBQUERY, LEN(@SUBQUERY) - 3);
SET @QUERY = 'SELECT * FROM TableA WHERE 1 = 1 AND (' + @SUBQUERY + ')'

--PRINT (@QUERY);
EXEC (@QUERY);
ROLLBACK
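One hedged refinement: rather than splicing @YourSearchValue into the generated text, each predicate could be built around a parameter (e.g. CONVERT(VARCHAR(50), <column>) LIKE '%' + @search + '%') and the final statement run with sp_executesql, which expects an NVARCHAR statement variable:

EXEC sp_executesql @QUERY, N'@search VARCHAR(50)', @search = @YourSearchValue;

That keeps the search value out of the generated SQL string, which matters if it can contain quotes or comes from user input.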
Hope this helps.

Insert column value depending on an autoincrement column

I have table in my SQL Server database with following columns
ID | NAME | DOB | APPLICATION NO |
1 | JOHN | 31/05/1986 | KPL\2014\1 |
2 | MARY | 26/04/1965 | KPL\2014\2 |
3 | VARUN | 15/03/1972 | KPL\2014\3 |
Here the column ID is an auto-increment column, and the column APPLICATION NO is dependent on ID.
That means APPLICATION NO is the concatenation of KPL\, the YEAR and the column value of ID.
How can I write an insert query for this?
Why don't you use a computed column?
I would change the table's definition:
add a "year" column
add an "application_name" column (if it's not always "KPL")
then create your computed column from the needed fields:
alter table <yourTable> add computed_application_name as (application_name + '/' + CAST(<yearColumn> as VARCHAR(4)) + '/' + <otherColumn> + CAST(id as VARCHAR(MAX)))
Just use a computed column:
alter table t
    add application_no as ('KPL' + cast(year(getdate()) as varchar(255)) + cast(id as varchar(255)));
It occurs to me that you really want the year when the row was inserted. For that purpose, I would recommend adding a CreatedAt column and then using that instead of getdate():
alter table t add CreatedAt datetime default getdate();
If you already have data in the table, then set the value (this is not needed if the table is empty):
update t set CreatedAt = getdate();
Then define application_no:
alter table t
    add application_no as ('KPL' + cast(year(CreatedAt) as varchar(255)) + cast(id as varchar(255)));
Simply use a computed column for 'APPLICATION_NO':
create table tbl (
ID int,
NAME varchar(10),
DOB date,
APPLICATION_NO as ('KPL\'+cast(year(dob) as char(4))+'\'+cast(id as varchar))
)
You can further persist and index it as well.
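A quick usage sketch against that definition (note that the computed value here is driven by the year of DOB, so it reflects the birth year rather than the year of insertion):

insert into tbl (ID, NAME, DOB) values (1, 'JOHN', '1986-05-31');
select ID, NAME, APPLICATION_NO from tbl;
-- APPLICATION_NO comes back as KPL\1986\1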
You can create a trigger to run after insert. In your insert, leave APPLICATION_NO out, and have your trigger update this column based on ID.
Something like this:
CREATE TRIGGER [TRIGGER_TABLE1] ON TABLE1
AFTER INSERT
AS
BEGIN
UPDATE TABLE1
SET APPLICATION_NO = 'KPL\' + CONVERT(VARCHAR, YEAR(GETDATE())) + '\' + CONVERT(VARCHAR, ID)
WHERE APPLICATION_NO IS NULL
END
EDIT: This way, you should be able to use other columns' values as well. APPLICATION_NO just needs to accept NULL values, so you can control in this way what gets updated and what doesn't.
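A hedged variation of the trigger above that only touches the rows from the current insert, by joining to the inserted pseudo-table instead of relying on APPLICATION_NO being NULL:

CREATE TRIGGER [TRIGGER_TABLE1] ON TABLE1
AFTER INSERT
AS
BEGIN
    UPDATE t
    SET APPLICATION_NO = 'KPL\' + CONVERT(VARCHAR(4), YEAR(GETDATE())) + '\' + CONVERT(VARCHAR(12), t.ID)
    FROM TABLE1 t
    INNER JOIN inserted i ON i.ID = t.ID
END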
Use SCOPE_IDENTITY() to get the inserted record's ID:
Declare @Id INT

Insert Into TABLENAME (NAME, DOB)
Values ('Name', '1/1/1990')

Set @Id = SCOPE_IDENTITY()

IF (@Id > 0)
begin
    declare @col varchar(100) = ''
    set @col = 'KPL\' + CONVERT(varchar(4), YEAR(GETDATE())) + '\' + CONVERT(varchar(4), @Id)

    Update TABLENAME
    SET APPLICATIONNO = @col
    Where ID = @Id
end
If you really want to do it yourself with an insert statement, assuming that you want to insert data from another table,
and that TABLE1 is the table you described in your question and TABLE2 is the table you want to import data from:
DECLARE @MAXID INT
SELECT @MAXID = MAX(ID) FROM TABLE1

INSERT INTO TABLE1
(NAME, DOB, APPLICATION_NO)
SELECT
    NAME,
    DOB,
    'KPL\'
    + CONVERT(CHAR(4), YEAR(GETDATE())) + '\'
    + CONVERT(VARCHAR(25), @MAXID + ROW_NUMBER() OVER (ORDER BY NAME)) AS APPLICATION_NO
FROM TABLE2