I have the table tFile in my database. I want to loop through the table and update filename, as shown in this example:
id    fileId   filename
------------------------
231   555      NULL
123   444      NULL
572   732      NULL
I want to update filename to be test(fileId), as shown here:
id    fileId   filename
------------------------
231   555      test(555)
123   444      test(444)
572   732      test(732)
I wrote a SQL script that updates just one filename, with the id written in manually, but I need to update every filename using a loop. I think I have to do a nested loop: one to loop through fileId and another to loop through id.
But I'm sorry, I have no experience with this. I need help please.
update tFile
set filename = 'test' + '(' + CAST(fileId AS varchar(10)) + ')'
where id in (231);
As per your expected output, I've created a sample table; please have a look.
DECLARE @tFile TABLE
(
    Id INT IDENTITY(1,1),
    fileId INT,
    filename NVARCHAR(50)
)

INSERT INTO @tFile VALUES (555, NULL), (444, NULL), (732, NULL)

SELECT *, CONCAT('TEST(', fileId, ')') AS [FileNameUpdate] FROM @tFile

UPDATE @tFile
SET filename = CONCAT('TEST(', fileId, ')')

SELECT * FROM @tFile
/* Added @tResult table */
DECLARE @tResult TABLE
(
    fileId INT,
    filename NVARCHAR(50)
)

INSERT INTO @tResult (fileId, filename) -- insert query
SELECT fileId, filename FROM @tFile     -- select query

SELECT * FROM @tResult
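Once the sample behaves as expected, the same set-based UPDATE should work against your real table directly (CONCAT converts the int fileId to text implicitly); no loop over id or fileId is needed:

UPDATE tFile
SET filename = CONCAT('test(', fileId, ')');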
Following situation:
I have a dynamically built (by columns) table, e.g.:
Rowid   UniqueID   Name   Birthdate   Town      ...
1       null       Joe    Jan-93      Cologne
2       null       Nick   Okt-00      London
I am building this temp table to create a UniqueID for all data in my database.
The temp table was created by two loops which run through all my database tables & columns and copy all primary key data into it.
My aim is to update the UniqueID column of my temp table with the concatenated values of the data, e.g.:
UniqueID
JoeJan-93Cologne
NickOkt-00London
Do you have an idea how to update UniqueID?
What I'm thinking about is:
Loop 1: going through all tables
    Select table of schema
    Loop 2: going through all columns of table
        Select column of schema
        Copy column to my temp table
        -- here an update like ... set UniqueID = select concat(UniqueID, @Column)
        -- from #table where RowID = RowID
    End loop 2
End loop 1
Is this possible?
Or do I have to open a third loop which runs through all rows and concatenates the values?
You can try this:
Update <YourTableName>
set UniqueId = ISNULL(Name, '') + ISNULL(Cast(Birthdate as Varchar(10)), '') + ISNULL(Town, '')
You can use CONCAT() with an UPDATE statement; no loop is required:
UPDATE t
SET UniqueID = CONCAT(Name, Birthdate, Town);
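If the column list is only known at run time (the table here is built dynamically), the CONCAT expression can be generated from column metadata instead of hard-coding Name, Birthdate, Town. A minimal sketch, assuming the target is a permanent table named dbo.TempTable and that Rowid and UniqueID should be excluded from the concatenation:

DECLARE @cols nvarchar(max), @sql nvarchar(max);

-- Build "CONCAT([col1],[col2],...)" from the table's column metadata
SELECT @cols = STUFF((
    SELECT ',' + QUOTENAME(c.name)
    FROM sys.columns c
    WHERE c.object_id = OBJECT_ID('dbo.TempTable')
      AND c.name NOT IN ('Rowid', 'UniqueID') -- keep the key columns out
    FOR XML PATH('')), 1, 1, '');

SET @sql = N'UPDATE dbo.TempTable SET UniqueID = CONCAT(' + @cols + N');';
EXEC sp_executesql @sql;

For an actual #temp table, the OBJECT_ID lookup would go against tempdb ('tempdb..#TempTable') instead.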
I have the following T-SQL script:
declare @Name nvarchar

declare data cursor for
select Name from MyDB.dbo.MyTable;

OPEN data;

-- Perform the first fetch.
FETCH NEXT FROM data;

-- Check @@FETCH_STATUS to see if there are any more rows to fetch.
WHILE @@FETCH_STATUS = 0
BEGIN
    -- This is executed as long as the previous fetch succeeds.
    FETCH NEXT FROM data INTO @Name;
    Print 'Name: ' + @Name
END

CLOSE data;
DEALLOCATE data;
GO
I want to make a script that will compare each of the strings in the first column with each of the strings in the second column.
The problem is, I don't know how to loop through the rows and take each string value separately.
The code above prints only the first value in the query result.
What am I doing wrong?
To compare all values from one column to all values in another column you don't need a cursor; a simple join will do the work. Since you didn't provide sample data or desired results, I had to make up my own.
Create and populate a sample table (please save us this step in your future questions):
CREATE TABLE MyTable
(
Id int identity(1,1),
Name1 char(3),
Name2 char(3)
)
INSERT INTO MyTable (Name1, Name2) VALUES
('abc','def'),('zyx','abc'),
('ghi','jkl'),('yza','ghi'),
('mno','pqr'),('nml','mno'),('pqr','qpo'),
('stu','vwx'),('wvu','tsr'),('kji','hgf')
The query:
SELECT T1.Id, T1.Name1, T1.Name2, T2.Id, T2.Name1, T2.Name2
FROM MyTable T1
JOIN MyTable T2 ON T1.Name1 = T2.Name2
Result:
Id   Name1   Name2   Id   Name1   Name2
1    abc     def     2    zyx     abc
3    ghi     jkl     4    yza     ghi
5    mno     pqr     6    nml     mno
7    pqr     qpo     5    mno     pqr
You probably don't want to use a cursor.
Are your columns in the same table? If so, it's as simple as this:
-- Show All rows with [DIFFERENT] Name and Name2 fields
SELECT
Name,
Name2
FROM [MyDB].[dbo].[MyTable]
WHERE
Name <> Name2
-- Show All rows with [SAME] Name and Name2 fields
SELECT
Name,
Name2
FROM [MyDB].[dbo].[MyTable]
WHERE
Name = Name2
If not, you will need to post the table definitions and column names to get a more concrete example.
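For completeness, if you do want the cursor version from the question to print every row: the first FETCH has no INTO clause, so the first row is consumed without ever being assigned to @Name, and because the next FETCH runs before the PRINT, the last value ends up printed twice. A corrected sketch (the nvarchar(100) length is an assumption; nvarchar with no length holds only one character):

DECLARE @Name nvarchar(100);

DECLARE data CURSOR FOR
SELECT Name FROM MyDB.dbo.MyTable;

OPEN data;

-- Fetch the first row INTO the variable.
FETCH NEXT FROM data INTO @Name;

WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT 'Name: ' + @Name;          -- use the current row first...
    FETCH NEXT FROM data INTO @Name; -- ...then advance to the next one
END

CLOSE data;
DEALLOCATE data;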
I have a temp table with 2 columns; each column is populated from a parameter I've declared. I've done so using this SQL:
Declare
    @SourceKey varchar(40) = '1109'
    ,@DepartmentKey varchar(1500) = '14,55'
The table is then populated using the following SQL:
if OBJECT_ID('Tempdb..#Department','U') is not null
    drop table #Department

CREATE TABLE #Department
(DepartmentKey int
,BaseTerm varchar(5))

INSERT INTO #Department
SELECT value
      ,skt.[Key]
FROM YYY.ParseList(@DepartmentKey, ',')
JOIN #SourceKeyTable skt
    ON skt.[Key] = skt.[Key]
If I select * From #Department I get these results:
DepartmentKey | SourceKey
14            | 1109
55            | 1109
That's what I expect. I then join the temp table to my main query like so:
JOIN #Department d
on Table.rKey = d.DepartmentKey
I need to have a temp table to allow for a multi-select in the Visual Studio report. However, with the department key equal to 14 AND 55, it's skewing my results. I need one value passed, 14 OR 55, not both, but the temp table is necessary for the multi-select.
Any suggestions on how to pass only one value while still keeping the setup mentioned previously?
I'll do my best to answer questions, as I might not have explained this well enough for some.
I reckon you need to parse your list into a temporary table or table variable and then do whatever needs to be done with it.
It's difficult to see from your code exactly what that would involve, but the code below should illustrate the idea sufficiently.
I create a table variable, insert the parsed list values, and then cycle through them, printing the values to output:
--Create a table variable
DECLARE @Departments TABLE (DepartmentOrder int identity, RKey nvarchar(40) NOT NULL)

--Create variables to loop through the table variable
DECLARE @DepartmentOrder int
DECLARE @RKey nvarchar(40)

--Populate the table variable with the parsed list values
INSERT @Departments (RKey) SELECT RKey FROM YYY.ParseList(@Department, ',')

--Get the first list entry
SELECT @DepartmentOrder = MIN(DepartmentOrder) FROM @Departments

--While we've not reached the end
WHILE @DepartmentOrder IS NOT NULL
BEGIN
    --Get the Department key for this entry
    SET @RKey = (SELECT RKey FROM @Departments WHERE DepartmentOrder = @DepartmentOrder)

    --Use the values
    PRINT '@DepartmentOrder = ' + CONVERT(nvarchar(9), @DepartmentOrder)
    PRINT '@RKey = ''' + @RKey + ''''

    --Get the next list entry
    SET @DepartmentOrder = (SELECT MIN(DepartmentOrder) FROM @Departments WHERE DepartmentOrder > @DepartmentOrder)
END
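As an aside, on SQL Server 2016 and later the built-in STRING_SPLIT function can stand in for a custom splitter like YYY.ParseList; it returns one row per list element in a column named value:

--Populate the table variable using STRING_SPLIT instead of ParseList
INSERT @Departments (RKey) SELECT value FROM STRING_SPLIT(@Department, ',');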
My Table Schema is as follows:
Gender: char(1), not null
Last Name: varchar(25), null
First Name: varchar(35), not null
The data in the table looks like:
Gender | Last Name | First Name
M      | Doe       | John
F      | Marie     | Jane
M      | Jones     | Jameson
F      | Simpson   | Alice
I am now trying to update all the names in the table with the names present in the text files.
My Query is as follows:
-- Sort out the forenames we'll be using for the data; we make a #Name table because I have yet to figure out
-- inserting specific columns using BULK INSERT without using a format file.
CREATE TABLE #Name (Name VARCHAR(50))
CREATE TABLE #ForeNames (FirstName VARCHAR(50), Gender VARCHAR(1))
-- Move data into the #Name table
BULK INSERT #Name FROM 'c:\girlsforenames.txt' WITH (ROWTERMINATOR='\n')
-- Now move it to the forename table and add the gender
INSERT INTO #ForeNames SELECT [Name], 'F' FROM #Name
-- Delete the names from the temporary table
TRUNCATE TABLE #Name
-- Same for the boys
BULK INSERT #Name FROM 'c:\boysforenames.txt' WITH (ROWTERMINATOR='\n')
INSERT INTO #ForeNames SELECT [Name], 'M' FROM #Name
-- Now do the surnames
TRUNCATE TABLE #Name
BULK INSERT #Name FROM 'c:\surnames.txt' WITH (ROWTERMINATOR='\n')
DECLARE @Counter BIGINT
SET @Counter = 4

WHILE (@Counter > 0)
BEGIN
    UPDATE TableName
    SET
        [last_name] = (SELECT TOP 1 FirstName FROM #ForeNames),
        [first_name] = (SELECT TOP 1 Name FROM #Name ORDER BY NEWID()),
        [gender] = (SELECT TOP 1 Gender FROM #ForeNames ORDER BY NEWID());
    SET @Counter = @Counter - 1
END
DROP TABLE #Name
DROP TABLE #ForeNames
SELECT * FROM TableName
What happens is that all the rows in the table are updated with the same values, and each time I execute the query they are updated with a new set of values.
What I want is to loop through the rows and update each one with a different random name, but here the same random name is applied across all the rows of the table.
Any help would be appreciated.
Each SELECT statement is only being executed once in your example (and thus returns one result), and since your UPDATE isn't being limited, you're applying the same value to every row.
If you want to update each row with different values, you can use a CTE and the ROW_NUMBER() function to pair each row with a different value.
There's no need to loop; you can do it in one fell swoop:
WITH cte AS (SELECT *, ROW_NUMBER() OVER (ORDER BY (SELECT 1)) AS n1
             FROM TableName
            )
UPDATE cte
SET FirstName = names.Name
FROM cte
JOIN (SELECT *, ROW_NUMBER() OVER (ORDER BY NEWID()) AS n2
      FROM #Name
     ) names
    ON cte.n1 = names.n2
Demo: SQL Fiddle
This example is just for the FirstName.
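The same pattern extends to the other columns. A sketch using the column names from the question, under the assumption that #ForeNames and #Name each hold at least as many rows as TableName (with fewer rows, the inner joins would simply leave the remaining rows un-updated):

WITH cte AS (SELECT *, ROW_NUMBER() OVER (ORDER BY (SELECT 1)) AS n1
             FROM TableName
            )
UPDATE cte
SET first_name = f.FirstName,
    gender     = f.Gender,
    last_name  = s.Name
FROM cte
JOIN (SELECT FirstName, Gender, ROW_NUMBER() OVER (ORDER BY NEWID()) AS n2
      FROM #ForeNames
     ) f ON cte.n1 = f.n2
JOIN (SELECT Name, ROW_NUMBER() OVER (ORDER BY NEWID()) AS n2
      FROM #Name
     ) s ON cte.n1 = s.n2;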
I am doing this:
, cte_proc_code (accn,proc_code) as
(SELECT accn_id,
(SELECT proc_code + ','
FROM [XDataFullExtract].[dbo].[accn_billed_procedures]
FOR XML PATH('')
)
FROM [XDataFullExtract].[dbo].[accn_billed_procedures]
group by accn_id)
My data looks like this:
accn_id,proc_code
AA123, 1132
AA123, 5234
AA123, 4524
BB123, 2345
BB123, 4444
The result that I would like is:
accn_id,proc_code
AA123, 1132, 5234, 4524
BB123, 2345, 4444
My solution works; however, it's WAY TOO SLOW!!
Is there a faster way to do this? I think the XML is slowing me down.
This approach involves adding a staging column to your table, but could run faster:
-- table, with new varchar(max) column cncat added
declare @t table (accn_id varchar(30), proc_code varchar(30), cncat varchar(max));
declare @concat varchar(max) = ''; -- staging variable

insert into @t values
  ('AA123','1132','')
, ('AA123','5234','')
, ('AA123','4524','')
, ('BB123','2345','')
, ('BB123','4444','');

-- update cncat
with cte as (select *, r = row_number() over (partition by accn_id order by proc_code) from @t)
update cte set @concat = cncat = case cte.r when 1 then '' else @concat end + ',' + proc_code;

-- results
select accn_id, cncat = stuff(max(cncat), 1, 1, '')
from @t
group by accn_id;

-- clean up (optional)
update @t set cncat = '';
go
In the query you have provided, it is not "the XML that is slowing you down".
You are building a comma-separated string out of every value in the table, once for every row returned.
Your sub-query is missing a WHERE clause that should filter the concatenated values down to the rows belonging to the current row of the outer query.
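Concretely, that means correlating the inner query with an alias on the outer table. A sketch keeping the table names from the question (outr and inr are aliases introduced here, and STUFF strips the leading comma):

, cte_proc_code (accn, proc_code) AS
(
    SELECT accn_id,
           STUFF((SELECT ',' + proc_code
                  FROM [XDataFullExtract].[dbo].[accn_billed_procedures] inr
                  WHERE inr.accn_id = outr.accn_id -- the missing filter
                  FOR XML PATH('')
                 ), 1, 1, '')
    FROM [XDataFullExtract].[dbo].[accn_billed_procedures] outr
    GROUP BY accn_id
)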