I have finished all my changes to a database table in SQL Server Management Studio 2012, but now I have a large gap between some values due to editing. Is there a way to keep my data, but re-assign all the IDs from 1 up to my last value?
I would like this cleaned up because I populate dropdownlists with these values and then interact with my database on the assumption that my dropdownlist index and the table's ID match up, which is not the case right now.
My current DB has a large gap from 7 to 28; I would like to shift everything from 28 and up back down to 8, 9, 10, 11, etc., so that my database has NO gaps from 1 onward.
If the solution is tricky please give me some steps as I am new to SQL.
Thank you!
Yes, there are any number of ways to "close the gaps" in an auto generated sequence. You say you're new to SQL so I'll assume you're also new to relational concepts. Here is my advice to you: don't do it.
The ID field is a surrogate key. There are several aspects of surrogates one must be mindful of when using them, but the one I want to impress upon you is,
-- A surrogate key is used to make the row unique. Other than the guarantee that
-- the value is unique, no other assumptions may be made concerning the value.
-- In particular, no meaning may be derived from the value as to the contents of
-- the row or the row's relationship to any other row.
You have designed your app with a built-in assumption of the value of the key field (that they will be consecutive). Already it is causing you problems. Do you really want to go through this every time you make changes to the table? And suppose a future feature requires you to filter out some of the choices according to an option the user has selected? Or enable the user to specify the order of the items? Not going to be easy. So what is the solution?
You can add an additional (non-visible) field to the dropdown list that contains the key value. When the user makes a selection, use the selected index to get the key value of the selection, then go out to the database and get whatever additional data you need. This will work whether you populate the list from the entire table, select only a few rows according to some as-yet-unknown filtering criteria, or change the order in any way.
Voilà. You never have this problem again, no matter how often you add and remove rows in the table.
However, on the off chance that you are as stubborn as me (not likely!) or just refuse to listen to the melodious voice of reason and experience, then try this:
Create a new table exactly like the old table, including auto incrementing PK.
Populate the new table using a Select from the old table. You can specify any order you want.
Drop the old table.
Rename the new table to the old table name.
You will have to drop and redefine any FKs from other tables. But this entire process
can be placed in a script because if you do this once, you'll probably do it again.
Now all the values are consecutive. Until you edit the table again...
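A minimal T-SQL sketch of those steps, assuming a hypothetical table dbo.Items with an identity column Id and a single data column Name (substitute your own schema and foreign keys):

-- 1. New table, same shape as the old one, with a fresh identity column.
CREATE TABLE dbo.Items_New
(
    Id   INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    Name NVARCHAR(100) NOT NULL
);

-- 2. Copy the data across; the identity values are regenerated as 1, 2, 3, ...
--    (the ORDER BY controls the order in which the new Ids are assigned).
INSERT INTO dbo.Items_New (Name)
SELECT Name
FROM dbo.Items
ORDER BY Id;

-- 3. Swap the tables (drop and recreate any FKs pointing at dbo.Items first).
DROP TABLE dbo.Items;
EXEC sp_rename 'dbo.Items_New', 'Items';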
You should refactor the code for your dropdown list and not the PK of the table.
If you do not agree, you can do one of the following:
Insert another column holding the dropdown's "order of appearance", create a unique index on it, and fill it by hand (or programmatically).
Replacing the SERIAL with a plain INT would also work; create a unique index on the column and fill it by hand (or programmatically).
Remove the large IDs and reseed your serial/identity column; the exact code depends on your DBMS (a SQL Server example follows).
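For SQL Server, the reseed part of that last option is a one-liner; a sketch assuming a hypothetical table dbo.Items whose highest remaining Id is 7:

-- The next row inserted into dbo.Items will get Id = 8.
-- Note this only affects future inserts; it does not renumber existing rows.
DBCC CHECKIDENT ('dbo.Items', RESEED, 7);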
This happens to me all the time. If you don't have any foreign key constraints then it should be an easy fix.
Remember a DELETE statement will remove the record but keep the identity seed the same. (If I remove id #5 and #5 was the last record inserted, then SQL Server still stores the identity seed value as "6".)
TRUNCATING the table will reset the identity seed back to its original value.
SET IDENTITY_INSERT [TABLE] ON can also be used to insert the correct data in the correct order if truncating cannot happen.
-- Copy everything to a temp table, empty the original (which resets the seed),
-- then re-insert the data and let the identity column renumber it.
SELECT *
INTO #tempTable
FROM [TableTryingToFix];

TRUNCATE TABLE [TableTryingToFix];

INSERT INTO [TableTryingToFix] (COL1, COL2, COL3, ETC)
SELECT COL1, COL2, COL3, ETC
FROM #tempTable
ORDER BY oldTableID;
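Alternatively, if you need to preserve the original ID values instead of letting them be regenerated (the SET IDENTITY_INSERT route mentioned above), a sketch assuming the identity column is called ID (substitute your real column names):

SET IDENTITY_INSERT [TableTryingToFix] ON;

INSERT INTO [TableTryingToFix] (ID, COL1, COL2, COL3)
SELECT ID, COL1, COL2, COL3
FROM #tempTable
ORDER BY ID;

SET IDENTITY_INSERT [TableTryingToFix] OFF;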
Is it possible to increase the value of a number in a column with a trigger every time it gets selected? We have special tables where we store the new ID, and when we update it from the app it tends to hit conflicts before the update happens, even though the whole thing takes less than a second. So I was wondering whether it is possible to have the database increase the value after every select on that column. Do not ask me why we do not use auto-increment for the IDs, because I do not know.
Informix provides the SERIAL and BIGSERIAL types (and also SERIAL8, but don't use that) which provide autoincrement support. It also provides SEQUENCES with more sophisticated autoincrements. You should aim to use one of those.
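A sketch of the sequence route in Informix, assuming a hypothetical sequence named my_seq (check your server version's documentation for the exact options available):

CREATE SEQUENCE my_seq
    START WITH 1
    INCREMENT BY 1
    NOCYCLE;

-- Fetch the next value; selecting from systables WHERE tabid = 1 is a common
-- way to get a single-row result in Informix.
SELECT my_seq.NEXTVAL
FROM systables
WHERE tabid = 1;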
Trying to use a SELECT trigger to update the table being selected from is, at best, fraught with problems about transactions and the like (problems which both the types and sequences carefully avoid).
If your design team needs help making effective use of these, ask a new question outlining what you want to achieve.
Normally, the correct way to proceed is to make the ID column in each table that defines 'something' (the Orders table, the Customer table, …) into a SERIAL column and either not insert a value into the ID column or insert 0 into it. The generated value can be retrieved and used when creating auxiliary information — order items, etc.
Note that you could think about using:
CREATE TABLE xyz_sequence
(
xyz SERIAL NOT NULL PRIMARY KEY
);
and using:
INSERT INTO xyz_sequence VALUES(0);
and then retrieving the inserted value — in Informix ESQL/C, you'd use sqlca.sqlerrd[1], in other languages, other techniques. You can also delete the newly inserted record, or even all the records in the table. You can afford to ignore errors from the DELETE statement; sooner or later, the rows will be deleted. The next value inserted will continue where the prior ones left off.
In a stored procedure, you'd use DBINFO('sqlca.sqlerrd1') to get the inserted value. You'd use DBINFO('bigserial') to get the value if you use a BIGSERIAL type.
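A rough SPL sketch of that technique, using the xyz_sequence table above (treat it as an outline, not tested code):

CREATE PROCEDURE next_xyz() RETURNING INTEGER;
    DEFINE v INTEGER;

    -- Inserting 0 makes the SERIAL column generate the next value.
    INSERT INTO xyz_sequence VALUES (0);
    LET v = DBINFO('sqlca.sqlerrd1');

    -- Housekeeping; as noted above, failures here can safely be ignored.
    DELETE FROM xyz_sequence;

    RETURN v;
END PROCEDURE;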
I found a possible answer in this question: update with return value. Instead of doing it with a select, it seems better to return the value directly from the update; since the update takes locks, it should be safer even in a multithreaded application. But these are just my assumptions. Hopefully it will help someone.
While creating a tkinter application to store book information, I realized that simply deleting a row of information from the SQL database does not update the indexes. It's kind of hard to explain, but here is a picture of what I mean:
link to picture. (still young on this account, so pictures can't be embedded, sorry for the inconvenience)
As you can see, the first column represents the index and index 3 is missing because I deleted it. Is there a way such that upon deleting a row, anything below it just shifts up to cover for the empty spot?
Your use of the word "index" must be based on the application language, not the database language. In databases, indexes are additional data structures that speed certain operations on tables.
You are referring to an "id" column, presumably one that is defined automatically as identity, auto_increment, serial, or whatever the underlying database uses.
A very important point is that deleting a row from a table does not affect other rows in the table (unless you have gone through the work of writing triggers to make that happen). It just deletes that one row.
The second more important point is that you do not want to change the "identity" of rows -- and that is what the column you are calling an "index" is doing. It identifies the row. It not only identifies the row today, but it identifies the same row tomorrow. And, if it existed, yesterday. That is, you don't want to change the identity.
This is even more important when you have foreign key relationships -- that is, other tables that refer to this row. Those relationships could get all messed up if the ids start changing.
SQL does offer a simple way to get a number with no gaps:
select row_number() over (order by "index") as seqnum
from t;
I have a database with 2 tables: CurrentTickets & ClosedTickets. When a user creates a ticket via web application, a new row is created. When the user closes a ticket, the row from currenttickets is inserted into ClosedTickets and then deleted from CurrentTickets. If a user reopens a ticket, the same thing happens, only in reverse.
The catch is that one of the columns being copied back to CurrentTickets is the PK column (TicketID), which has IDENTITY set to ON.
I know I can set IDENTITY_INSERT to ON, but as I understand it this is generally frowned upon. I'm assuming that my database is a bit poorly designed. Is there a way for me to accomplish what I need without using IDENTITY_INSERT? How would I keep the TicketID column auto-incremented without making it an identity column? I figure I could add another column RowID and make that the PK, but I still want the TicketID column to auto-increment if possible while not being an identity column.
This just seems like bad design with 2 tables. Why not just have a single tickets table that stores all tickets. Then add a column called IsClosed, which is false by default. Once a ticket is closed you simply update the value to true and you don't have to do any copying to and from other tables.
All of your code around this part of your application will be much simpler and easier to maintain with a single table for tickets.
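A sketch of that single-table design, with hypothetical column names (Title is made up; IsClosed is the flag suggested above):

CREATE TABLE dbo.Tickets
(
    TicketID INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    Title    NVARCHAR(200) NOT NULL,
    IsClosed BIT NOT NULL DEFAULT 0
);

DECLARE @TicketID INT = 42;  -- whichever ticket the user is acting on

-- Closing a ticket is just an update; nothing is copied or deleted.
UPDATE dbo.Tickets SET IsClosed = 1 WHERE TicketID = @TicketID;

-- Reopening it is the same update in reverse.
UPDATE dbo.Tickets SET IsClosed = 0 WHERE TicketID = @TicketID;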
The simple answer is: DO NOT make it an identity column if you want to influence the next ID generated in that column.
Also, I think you have a really poor schema. Rather than having two tables, just add another column to your CurrentTickets table, something like Open BIT, set its value to 1 by default, and change it to 0 when the client closes the ticket.
And you can turn it on and off as many times as the client changes his mind, without having to go through all the trouble of IDENTITY_INSERT and managing a whole separate table.
Update
Since you have now mentioned it's SQL Server 2014, you have access to something called a sequence object.
You define the object once, and then every time you want a sequential number you just select the next value from it; it is a kind of hybrid between an identity column and a plain INT column.
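A sketch of the sequence-object approach (available in SQL Server 2012 and later), assuming a hypothetical sequence dbo.TicketNumbers and a made-up Title column:

CREATE SEQUENCE dbo.TicketNumbers
    AS INT
    START WITH 1
    INCREMENT BY 1;

-- Use it wherever you insert, in either table, so ticket IDs stay unique across both.
INSERT INTO dbo.CurrentTickets (TicketID, Title)
VALUES (NEXT VALUE FOR dbo.TicketNumbers, N'Example ticket');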
To achieve this in recent versions of SQL Server, use the OUTPUT clause (definition on MSDN).
OUTPUT clause used with a table variable:
DECLARE @MyTableVar TABLE (...);

DELETE FROM dbo.CurrentTickets
OUTPUT DELETED.* INTO @MyTableVar
WHERE <...>;

INSERT INTO dbo.ClosedTickets
SELECT * FROM @MyTableVar;
The second table should have the ID column, but without the IDENTITY property; uniqueness is already enforced by the identity on the other table.
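Putting it together, a sketch with made-up columns (Title standing in for the rest of your real schema):

DECLARE @MyTableVar TABLE
(
    TicketID INT NOT NULL,
    Title    NVARCHAR(200) NOT NULL
);

DECLARE @TicketID INT;
SET @TicketID = 42;  -- the ticket being closed

DELETE FROM dbo.CurrentTickets
OUTPUT DELETED.TicketID, DELETED.Title INTO @MyTableVar
WHERE TicketID = @TicketID;

INSERT INTO dbo.ClosedTickets (TicketID, Title)
SELECT TicketID, Title
FROM @MyTableVar;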
I am using Microsoft SQL Server and I have a master-detail scenario where I need to store the order of the details. So in the Detail table I have ID, MasterID, Position and some other columns. There is also a unique index on MasterID and Position. It works OK except in one case: when I have some existing details and I change their order. For example, when I swap the detail at position 3 with the detail at position 2, and I save the detail now at position 2 (which in the database still has Position equal to 3), SQL Server protests because of the index's uniqueness constraint.
How to solve this problem in a reasonable way?
Thank you in advance
Lukasz Glaz
This is a classic problem and the answer is simple: if you want to move item 3 to position 2, you must first change the sort column of 2 to a temporary number (e.g. 99). So it goes like this:
Move 2 to 99
Move 3 to 2
Move 99 to 3
You must be careful, though, that your temporary value is never used in normal processing and that you respect multiple threads if applicable.
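A sketch of the swap in T-SQL, using the Detail table from the question and -99 as the temporary value:

DECLARE @MasterID INT;
SET @MasterID = 1;  -- the master row whose details are being reordered

BEGIN TRANSACTION;

-- Park the row currently at position 2 on a value no real row ever uses.
UPDATE Detail SET Position = -99 WHERE MasterID = @MasterID AND Position = 2;

-- Move the row from position 3 into the freed slot.
UPDATE Detail SET Position = 2 WHERE MasterID = @MasterID AND Position = 3;

-- Land the parked row on position 3.
UPDATE Detail SET Position = 3 WHERE MasterID = @MasterID AND Position = -99;

COMMIT TRANSACTION;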
Update: BTW - one way to deal with the "multiple users may be changing the order" issue is to do what I do: give each user a numerical ID and then add this to the temporary number (my staff ID is actually the unique identity field ID from the staff table used to gate logins). So, for example, if your positions will never be negative, you might use -1000 - UserID as your temporary value. Trust me on one thing though: you do not want to just assume that you'll never have a collision. If you think that and one does occur, it'll be extremely hard to debug!
Update: GUZ points out that his users may have reordered an entire set of line items and submitted them as a batch - it isn't just a switch of two records. You can approach this in one of two ways, then.
First, you could change the existing sort fields of the entire set to a new set of non-colliding values (e.g. -100 - (staffID * maxSetSize) + existingOrderVal) and then go record-by-record and change each record to the new order value.
Or you could essentially treat it like a bubble sort on an array where the orderVal value is the equivalent of your array index. Either this makes perfect sense to you (and is obvious) or you should stick with solution 1 (which is easier in any event).
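A sketch of the first approach, simplified to ignore the per-user offset: one pass parks the whole set on values that cannot collide, and a second pass writes the order the user chose (here old position 3 becomes 1, old 1 becomes 2, and old 2 becomes 3):

DECLARE @MasterID INT;
SET @MasterID = 1;  -- the master row whose details are being reordered

-- Pass 1: park every row of the set on a non-colliding value.
UPDATE Detail
SET Position = -1000 - Position
WHERE MasterID = @MasterID;

-- Pass 2: one statement per row (or a loop in application code).
UPDATE Detail SET Position = 1 WHERE MasterID = @MasterID AND Position = -1003;
UPDATE Detail SET Position = 2 WHERE MasterID = @MasterID AND Position = -1001;
UPDATE Detail SET Position = 3 WHERE MasterID = @MasterID AND Position = -1002;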
You could just remove the unique constraint (but keep a non-unique index) on the order column, and ensure uniqueness in your code if necessary.
I have a table with a primary key as bigint (the identity property is Yes, starting at 1 and incrementing by 1). This table is in production and is updated on a daily basis: lots of deleting and inserting.
The problem is that this key is growing quickly and is already at 8 digits; I worry it will overflow one day.
Fortunately, this key is not used as foreign keys to any other tables. It is just used to identify a row in the table. Therefore I can safely reset the key values starting from 1 again, maybe once a year.
I could create a blank table and copy other field data there, then remove all the rows in the original table, reset the key/table and finally copy data back.
I am not sure if there is a built-in sp_xxx available in Microsoft SQL Server 2005 to do the job: just resetting the primary key sequence to start from 1 again without affecting the other column data. Or is there any other simple solution?
The maximum value for a bigint is 9,223,372,036,854,775,807. Even if you burned through 8 digits' worth of values (around 10^8) every day, you'd still need roughly 10^11 days to hit the max. That's on the order of 250 million years.
Assuming you still want to reset the column, the first question I have is: is the ordering of rows important? Meaning, do you rely on the fact that row 1000 comes before 1100 for, say, chronological or otherwise absolute ordering? If not, it's easy: delete the column and add it again. Hey presto, new values.
If you need to maintain the order you'll need to do it a little more carefully:
Lock the table;
Change the type so it's no longer auto increment;
Create a new column. You're best off giving it no indexes for now, as updating the index will slow down the writes;
Populate the new column with a loop of some kind incrementing a counter (or the SQL Server ROW_NUMBER trick, sketched after this list), ordering the writes to match the original order;
Replace the old column with the new one;
Reset auto-increment and primary key status.
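A sketch of the ROW_NUMBER variant of the populate step (SQL Server 2005+), assuming a hypothetical table dbo.BigTable with the old column Id and the new column NewId:

-- Assign 1, 2, 3, ... in the original Id order by updating through a CTE.
WITH numbered AS
(
    SELECT NewId,
           ROW_NUMBER() OVER (ORDER BY Id) AS rn
    FROM dbo.BigTable
)
UPDATE numbered
SET NewId = rn;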
Make a new table with a different name but exactly the same columns. Do an INSERT INTO new_table SELECT ... FROM old_table. Then drop the old table and rename the new table.
If you're using a BIGINT, you're not even close to overflowing it. If you're only at 10,000,000 after a year, you could go for a million years and still be fine.