SQL Server table - Update Order by - sql

I have a SQL Server table with the fields id, city, and country. I imported this table from an Excel file; everything was imported successfully, but the id field is not ordered by number. The tool I used imported the rows in a seemingly random order.
What kind of UPDATE command should I use in SQL Server Management Studio Express to re-order my ids?

Do you have a primary key and a clustered index on your table? If not, id is a good candidate for a primary key, and when you create the primary key it will become the clustered index by default.
Assuming this is your table
create table CityCountry(id int, city varchar(10), country varchar(10))
And you add data like this.
insert into CityCountry values (2, '2', '')
insert into CityCountry values (1, '1', '')
insert into CityCountry values (4, '4', '')
insert into CityCountry values (3, '3', '')
The output of select * from CityCountry will be
id          city       country
----------- ---------- ----------
2           2
1           1
4           4
3           3
A primary key column cannot accept NULL values, so first you have to run
alter table CityCountry alter column id int not null
Then you can add the primary key
alter table CityCountry add primary key (id)
When you do select * from CityCountry now you get
id          city       country
----------- ---------- ----------
1           1
2           2
3           3
4           4

Just use the ORDER BY clause of the SELECT statement to order them.
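For example, a minimal query against the CityCountry table from the answer above (the table name is just the earlier example's):
SELECT id, city, country
FROM CityCountry
ORDER BY id;   -- rows come back sorted by id regardless of insert order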

If I understood you correctly, you want all the ids to have consecutive numbers 1,2,3,4...
Imagine your table contents are:
select *
from yourTable
id city country
----------- ---------- ----------
1 Madrid Spain
3 Lisbon Portugal
7 Moscow Russia
10 Brasilia Brazil
(4 row(s) affected)
To reorder the ids, just run this:
declare @counter int = 0
update yourTable
set @counter = id = @counter + 1
(4 row(s) affected)
You can now check that indeed all the ids are reordered:
select *
from yourTable
id city country
----------- ---------- ----------
1 Madrid Spain
2 Lisbon Portugal
3 Moscow Russia
4 Brasilia Brazil
(4 row(s) affected)
However, you need to be careful with this. If some other table has a foreign key pointing to this id column, you first need to disable that FK, update this table, update the values in the other tables whose FKs point to yourTable, and finally re-enable the FKs.
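As a rough sketch of that disable/enable step (the child table Orders and the constraint name FK_Orders_yourTable are made up for illustration; substitute your own):
ALTER TABLE Orders NOCHECK CONSTRAINT FK_Orders_yourTable;          -- disable the FK check
-- ... update yourTable.id here, then fix the matching values in Orders ...
ALTER TABLE Orders WITH CHECK CHECK CONSTRAINT FK_Orders_yourTable; -- re-enable and re-validate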

First, I think you may have some misconceptions about the purpose of the Id column. The Id column is probably a surrogate key, i.e. an arbitrary value that is unique and non-null and that is never shown to the user. Thus, it should not be assumed to have any inherent meaning or sequence. In fact, you should always have another column or set of columns marked as unique to represent a "business key", i.e. values that are unique to the user. In your case, (city, country) should probably be unique (although you will likely need to add province or state, as it is common for the same city name to appear in the same country multiple times).
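For instance, a business key on the question's table could be declared with a unique constraint (the constraint name here is just an example):
ALTER TABLE CityCountry
ADD CONSTRAINT UQ_CityCountry_City_Country UNIQUE (city, country);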
Now, that said, it is possible to re-sequence your Ids if the following are true:
The Id column is not an identity column. Since this was from an import, I'm going to guess this is true.
There is no relationship to the table for which Cascade Update is not enabled.
You are using SQL Server Express 2005 or later:
Update T2
Set Id = T1.NewId
From (
    Select Id
         , Row_Number() Over ( Order By Id ) As NewId
    From MyTable
) As T1
Join MyTable As T2
    On T2.Id = T1.Id

Related

Own id for every unique name in the table?

Is it possible to make a table that has auto-incrementing ids for every unique name that I put in the table?
For example:
ID NAME_ID NAME
----------------------
1 1 John
2 1 John
3 1 John
4 2 Mary
5 2 Mary
6 3 Sarah
7 4 Lucas
and so on.
Use the window function rank() to get a unique id per name. Or dense_rank() to get the same without gaps:
SELECT id, dense_rank() OVER (ORDER BY name) AS name_id, name
FROM tbl;
I would advise not to write that redundant information to your table; you can generate that number on the fly. Better yet, don't store name redundantly in that table at all: name would typically live in another table, with name_id as its PRIMARY KEY.
Then you have a "names" table and run "SELECT or INSERT" there to get a unique name_id for every new entry in the main table. See:
Is SELECT or INSERT in a function prone to race conditions?
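A minimal sketch of that normalized layout in T-SQL (table and column names are illustrative):
CREATE TABLE names (
    name_id INT IDENTITY(1,1) PRIMARY KEY,
    name    VARCHAR(100) NOT NULL UNIQUE
);
CREATE TABLE tbl (
    id      INT IDENTITY(1,1) PRIMARY KEY,
    name_id INT NOT NULL REFERENCES names (name_id)   -- look the name up instead of storing it
);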
First add the column to the table.
ALTER TABLE yourtable
ADD [UID] INT NULL;
ALTER TABLE yourtable
ADD constraint fk_yourtable_uid_id foreign key ([UID]) references yourtable([Serial]);
Then you can update the UID with the minimum Serial ID per Name.
UPDATE t
SET [UID] = q.[UID]
FROM yourtable t
JOIN
(
SELECT Name, MIN([Serial]) AS [UID]
FROM yourtable
GROUP BY Name
) q ON q.Name = t.Name
WHERE (t.[UID] IS NULL OR t.[UID] != q.[UID]) -- Repeatability

Add column to ensure composite key is unique

I have a table which needs to have a composite primary key based on 2 columns (Material number, Plant).
For example, this is how it is currently (note that these rows are not unique):
MATERIAL_NUMBER PLANT NUMBER
------------------ ----- ------
000000000000500672 G072 1
000000000000500672 G072 1
000000000000500672 G087 1
000000000000500672 G207 1
000000000000500672 G207 1
However, I'll need to add the additional column (NUMBER) to the composite key such that each row is unique, and it must work like this:
For each MATERIAL_NUMBER, for each PLANT, let NUMBER start at 1 and increment by 1 for each duplicate record.
This would be the desired output:
MATERIAL_NUMBER PLANT NUMBER
------------------ ----- ------
000000000000500672 G072 1
000000000000500672 G072 2
000000000000500672 G087 1
000000000000500672 G207 1
000000000000500672 G207 2
How would I go about achieving this, specifically in SQL Server?
Best Regards!
SOLVED.
See below:
SELECT MATERIAL_NUMBER, PLANT, (ROW_NUMBER() OVER (PARTITION BY MATERIAL_NUMBER, PLANT ORDER BY VALID_FROM)) as NUMBER
FROM Table_Name
This will output the table in question, with the NUMBER column properly defined.
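If the generated NUMBER values actually need to be stored in the table rather than just selected, one possible sketch (assuming the same VALID_FROM ordering column as above) writes them back through an updatable CTE:
ALTER TABLE Table_Name ADD NUMBER INT NULL;   -- skip if the column already exists
GO
WITH numbered AS (
    SELECT NUMBER,
           ROW_NUMBER() OVER (PARTITION BY MATERIAL_NUMBER, PLANT ORDER BY VALID_FROM) AS rn
    FROM Table_Name
)
UPDATE numbered
SET NUMBER = rn;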
Suppose this is the actual table:
create table #temp1(MATERIAL_NUMBER varchar(30),PLANT varchar(30), NUMBER int)
Suppose you want to insert only a single record; then:
declare @Num int
select @Num = isnull(max(number), 0) from #temp1 where MATERIAL_NUMBER = '000000000000500672' and PLANT = 'G072'
insert into #temp1 (MATERIAL_NUMBER, PLANT, NUMBER)
values ('000000000000500672', 'G072', @Num + 1)
Suppose you want to insert bulk records. Your bulk sample data looks like this:
create table #temp11(MATERIAL_NUMBER varchar(30),PLANT varchar(30))
insert into #temp11 (MATERIAL_NUMBER,PLANT)values
('000000000000500672','G072')
,('000000000000500672','G072')
,('000000000000500672','G087')
,('000000000000500672','G207')
,('000000000000500672','G207')
You want to insert `#temp11` into `#temp1` while maintaining the NUMBER id:
insert into #temp1 (MATERIAL_NUMBER, PLANT, NUMBER)
select t11.MATERIAL_NUMBER, t11.PLANT,
       ROW_NUMBER() over (partition by t11.MATERIAL_NUMBER, t11.PLANT order by (select null)) + isnull(maxnum, 0) as Number
from #temp11 t11
outer apply (select MATERIAL_NUMBER, PLANT, max(NUMBER) maxnum
             from #temp1 t
             where t.MATERIAL_NUMBER = t11.MATERIAL_NUMBER and t.PLANT = t11.PLANT
             group by MATERIAL_NUMBER, PLANT) t
select * from #temp1
drop table #temp1
drop table #temp11
The main question is: why do you need the NUMBER column? In most cases you don't need it; you can use ROW_NUMBER() over (partition by t11.MATERIAL_NUMBER, t11.PLANT order by (select null)) to display it where you need it. This will be more efficient.
Otherwise, describe the actual situation and the number of rows involved for which you need the NUMBER column.
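For example, to display the sequence on the fly against the sample bulk data above, without storing it:
SELECT MATERIAL_NUMBER, PLANT,
       ROW_NUMBER() OVER (PARTITION BY MATERIAL_NUMBER, PLANT ORDER BY (SELECT NULL)) AS NUMBER
FROM #temp11;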

How can I insert records from one table into another table ordered by a specific column value?

Does anyone know how I can insert records from one table into another table ordered by a specific column value?
For Example:
I have the following table:
tableA:
record_id int,
name varchar(100),
nickname varchar(100),
chain_id int (PK),
chain_n int,
count int,
create_date datetime
tableB:
record_id int,
name varchar(100),
nickname varchar(100),
chain_id int (PK),
chain_n int,
create_date datetime
I have the following value for tableA:
record_id name nickname chain_id chain_n count create_date
1 Test One 1 1 2 2013-06-06
2 Test Two 2 1 5 2013-06-06
3 Test Three 3 1 3 2013-06-06
I am using the following script to insert the data into tableB:
INSERT INTO tableB
(
record_id,
name,
nickname,
chain_id,
chain_n,
create_date
)
SELECT
record_id,
name,
nickname,
chain_id,
chain_n,
create_date
FROM tableA
ORDER BY count DESC
I was expecting the data to be inserted into tableB like the following:
record_id name nickname chain_id chain_n create_date
2 Test Two 2 1 2013-06-06
3 Test Three 3 1 2013-06-06
1 Test One 1 1 2013-06-06
However, the result was as follows, still ordered by chain_id:
record_id name nickname chain_id chain_n create_date
1 Test One 1 1 2013-06-06
2 Test Two 2 1 2013-06-06
3 Test Three 3 1 2013-06-06
Does anyone know how I can manage to insert the records ordered by count instead?
It seems that record_id is your primary key, and thus the default ordering is done by that. That's why your output is the same as in tableA. Just use ORDER BY in the SELECT clause when querying tableB.
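For example (a sketch; since count only exists in tableA, it joins back to get the ordering column):
SELECT b.record_id, b.name, b.nickname, b.chain_id, b.chain_n, b.create_date
FROM tableB b
JOIN tableA a ON a.chain_id = b.chain_id   -- chain_id is the stated PK in both tables
ORDER BY a.[count] DESC;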
Inserting records into a table in a particular order doesn't make much sense to me, because ORDER BY doesn't actually influence the way the data is written to your drive. All records will be inserted in the order of your clustered index. In any case, if you want to query data from a table ordered by some field, you should explicitly state it in the ORDER BY clause, even if you want your data ordered by the clustered index columns. Although all data is meant to be ordered by the clustered index by default, it's still up to the SQL Server engine to define the best execution plan, and it may change the order if you don't specify it explicitly in the ORDER BY clause.

insert data from one table to another

I have 2 different tables but the columns are named slightly differently.
I want to take information from 1 table and put it into the other table. I need the info from table 1 put into table 2 only when the "info field" in table 1 is not null. Table 2 has a unique id anytime something is created, so anything inserted needs to get the next available id number.
Table 1
category
clientLastName
clientFirstName
incidentDescription
info field (when it is not null, insert all fields into table 2)
Table 2
*need a unique id assigned
client_last_name
client_first_name
taskDescription
category
This should work. You don't need to worry about the identity field in Table2.
INSERT INTO Table2
(client_last_name, client_first_name, taskDescription, category)
SELECT clientLastName, clientFirstName, incidentDescription, category
FROM Table1
WHERE info_field IS NOT NULL
create table Member(
Member_ID nvarchar(255) primary key,
Name nvarchar(255),
Address nvarchar(255)
)
insert into Member (Member_ID, Name, Address)
select m.Member_Id, m.Name, m.Address from library_Member m WHERE m.Member_Id IS NOT NULL

Updating Uncommitted data to a cell within an UPDATE statement

I want to convert a table storing data in Name-Value pairs to relational form in SQL Server 2008.
Source table
Strings
ID Type String
100 1 John
100 2 Milton
101 1 Johny
101 2 Gaddar
Target required
Customers
ID FirstName LastName
100 John Milton
101 Johny Gaddar
I am following the strategy given below:
Populate the Customers table with the ID values from the Strings table.
INSERT INTO CUSTOMERS SELECT DISTINCT ID FROM Strings
You get the following
Customers
ID FirstName LastName
100 NULL NULL
101 NULL NULL
Update Customers with the rest of the attributes by joining it to Strings using the ID column. This way each record in Customers will have 2 corresponding matching records in Strings.
UPDATE Customers
SET FirstName = (CASE WHEN S.Type = 1 THEN S.String ELSE FirstName END),
    LastName  = (CASE WHEN S.Type = 2 THEN S.String ELSE LastName END)
FROM Customers
INNER JOIN Strings S ON Customers.ID = S.ID
An intermediate state would look like this:
ID FirstName LastName ID Type String
100 John NULL 100 1 John
100 NULL Milton 100 2 Milton
101 Johny NULL 101 1 Johny
101 NULL Gaddar 101 2 Gaddar
But this is not working as expected, because when assigning the values in the SET clause it is setting only the committed values instead of the uncommitted ones. Is there any way to set uncommitted values (within the processing time of the query) in an UPDATE statement?
PS: I am not looking for alternate solutions, but to make my approach work by telling SQL Server to use uncommitted data for the UPDATE.
The easiest way to do it would be to split the update into two:
UPDATE Customers
SET FirstName = Strings.String
FROM Customers
INNER JOIN Strings ON Customers.ID=Strings.ID AND Strings.Type = 1
And then:
UPDATE Customers
SET LastName = Strings.String
FROM Customers
INNER JOIN Strings ON Customers.ID=Strings.ID AND Strings.Type = 2
There are probably ways to do it in one query such as a derived table, but unless that's a specific requirement I'd just use this approach.
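A sketch of that one-query idea using a derived table (not tested against the original data, but the shape would be roughly this):
UPDATE c
SET FirstName = d.FirstName,
    LastName  = d.LastName
FROM Customers c
JOIN (
    SELECT ID,
           MAX(CASE WHEN Type = 1 THEN String END) AS FirstName,
           MAX(CASE WHEN Type = 2 THEN String END) AS LastName
    FROM Strings
    GROUP BY ID
) d ON d.ID = c.ID;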
Have a look at this; it should avoid all the steps you had:
DECLARE @Table TABLE(
    ID INT,
    Type INT,
    String VARCHAR(50)
)
INSERT INTO @Table (ID,[Type],String) SELECT 100 ,1 ,'John'
INSERT INTO @Table (ID,[Type],String) SELECT 100 ,2 ,'Milton'
INSERT INTO @Table (ID,[Type],String) SELECT 101 ,1 ,'Johny'
INSERT INTO @Table (ID,[Type],String) SELECT 101 ,2 ,'Gaddar'
SELECT  IDs.ID,
        tName.String NAME,
        tSur.String Surname
FROM    (
            SELECT DISTINCT ID
            FROM @Table
        ) IDs LEFT JOIN
        @Table tName ON IDs.ID = tName.ID AND tName.[Type] = 1 LEFT JOIN
        @Table tSur ON IDs.ID = tSur.ID AND tSur.[Type] = 2
OK, I do not think that you will find a solution to what you are looking for. The documentation for UPDATE (Transact-SQL) states:
Using UPDATE with the FROM Clause
The results of an UPDATE statement are undefined if the statement includes a FROM clause that is not specified in such a way that only one value is available for each column occurrence that is updated, that is, if the UPDATE statement is not deterministic. For example, in the UPDATE statement in the following script, both rows in Table1 meet the qualifications of the FROM clause in the UPDATE statement; but it is undefined which row from Table1 is used to update the row in Table2.
USE AdventureWorks;
GO
IF OBJECT_ID ('dbo.Table1', 'U') IS NOT NULL
DROP TABLE dbo.Table1;
GO
IF OBJECT_ID ('dbo.Table2', 'U') IS NOT NULL
DROP TABLE dbo.Table2;
GO
CREATE TABLE dbo.Table1
(ColA int NOT NULL, ColB decimal(10,3) NOT NULL);
GO
CREATE TABLE dbo.Table2
(ColA int PRIMARY KEY NOT NULL, ColB decimal(10,3) NOT NULL);
GO
INSERT INTO dbo.Table1 VALUES(1, 10.0), (1, 20.0), (1, 0.0);
GO
UPDATE dbo.Table2
SET dbo.Table2.ColB = dbo.Table2.ColB + dbo.Table1.ColB
FROM dbo.Table2
INNER JOIN dbo.Table1
ON (dbo.Table2.ColA = dbo.Table1.ColA);
GO
SELECT ColA, ColB
FROM dbo.Table2;
Astander is correct (I am accepting his answer). The update is not happening because of a read-uncommitted issue, but because of the multiple rows returned by the JOIN. I have verified this. UPDATE picks only the first row generated from the multiple matching records to update the original table. This is the behavior for MSSQL, Sybase, and similar RDBMSs, but Oracle does not allow this kind of update and throws an error. I have verified this for MSSQL.
And again, MSSQL does not support updating a cell with uncommitted data. I don't know the status with other RDBMSs, and I have no idea if any RDBMS provides within-query isolation level management.
An alternate solution would be to aggregate to pivot the name-value rows into columns and then insert. This requires fewer scans compared to the methods given in the answers above.
INSERT INTO Customers
SELECT
ID
,MAX(CASE WHEN Type = 1 THEN String ELSE NULL END) AS FirstName
,MAX(CASE WHEN Type = 2 THEN String ELSE NULL END) AS LastName
FROM Strings
GROUP BY ID
Thanks to my friend Roji Thomas for helping me with this.