Inserting a value into a primary key field in Access - sql

I have several tables that I need to contain a "Null" value, so that if another table links to that particular record, it basically gets "Nothing". All of these tables have differing numbers of records - if I stick something on the end, it gets messy trying to find the "Null" record. So I would want to perform an INSERT query to append a record that either has the value 0 for the ID field, or a fixed number like 9999999. I've tried both, and Access doesn't let me run the query because of a key violation.
The thing is, I've run the same query before, and it's worked fine. But then I had to delete all the data and re-upload it, and now that I'm trying it again, it's not working. Here is the query:
INSERT INTO [Reading] ([ReadingID], [EntryFK], [Reading], [NotTrueReading])
VALUES (9999999, 0, "", FALSE)
Where 9999999 is, I've also tried 0. Both queries fail because of key violations.
I know that this isn't good db design. Nonetheless, is there any way to make this work? I'm not sure why I can't do it now whereas I could do it before.

I'm not sure if I'm fully understanding the issue here, but there may be a couple of reasons why this isn't working. The biggest thing is that any sort of primary key column has to be unique for every record in your lookup table. Like you mentioned above, 0 is a pretty common value for 'unknown' so I think you're on the right track.
Does 0 or 9999999 already exist in [Reading]? If so, that could be one explanation. When you wiped the table out before, did you completely drop and recreate the table, or just truncate it? Depending on how the table was set up, some databases will 'remember' all of the keys they used in the past if you simply delete all of the data in a table and re-insert it, rather than dropping and recreating the table. That is, if you had 100 records in a table and then truncated it (or deleted those records), the next time you insert a record into that table, the default PK value will still start at 101.
One thing you could do is drop and recreate the table, and set it up so that the primary key is generated by the database itself if it isn't already (i.e. an 'identity' type of column), making sure it starts at 0. Once you do that, the first record you insert should be your 'unknown' value (0), like so, letting the database itself decide what the ReadingID will be:
INSERT INTO [Reading] ([EntryFK], [Reading], [NotTrueReading]) VALUES (0, "", FALSE)
Then insert the rest of your data. If the other table that references [Reading] has a null value in the FK column, you can always join back to [Reading] on coalesce(fk_ReadingID, 0) = Reading.ReadingID.
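To make the sentinel-row idea concrete, here is a rough sketch in Python with SQLite rather than Access, so the table and column names are just stand-ins for the ones in the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Reading (ReadingID INTEGER PRIMARY KEY, Reading TEXT)")
cur.execute("CREATE TABLE Entry (EntryID INTEGER PRIMARY KEY, fk_ReadingID INTEGER)")

# Sentinel 'unknown' row with a fixed key of 0
cur.execute("INSERT INTO Reading VALUES (0, '')")
cur.execute("INSERT INTO Reading VALUES (1, 'real reading')")

# One entry points at a real reading; one has no reading at all (NULL FK)
cur.execute("INSERT INTO Entry VALUES (1, 1)")
cur.execute("INSERT INTO Entry VALUES (2, NULL)")

# COALESCE maps the NULL FK onto the sentinel row, so every entry joins
rows = cur.execute("""
    SELECT e.EntryID, r.Reading
    FROM Entry e
    JOIN Reading r ON COALESCE(e.fk_ReadingID, 0) = r.ReadingID
    ORDER BY e.EntryID
""").fetchall()
print(rows)  # [(1, 'real reading'), (2, '')]
```

The NULL foreign key never drops a row from the join; it just lands on the empty sentinel record.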
Hope that helps in some capacity.

Related

SQL SSMS: changed 'allow nulls' design, now can't switch back (`cannot insert the value NULL into column`)

I'm working in a dev SQL database within SSMS.
We've got a junction table, jnc_roles_users, which pulls values from two lookup tables: lu_roles and lu_users.
Initially, all but one column in the junction table had the Allow Nulls checkbox checked. I then went into the design editor and checked all the boxes - got the alert that this will affect the two lookup tables, and saved the changes successfully.
Now I want to switch back to the original table design: nothing allowing nulls except one column. But SSMS is no longer letting me uncheck the Allow Nulls box for one of the fields. Ironically, there's no problem unchecking our 'ID' primary key; it's our 'pseudo' primary key 'roleUserID' that I can't change back. When I try to uncheck 'Allow Nulls' for it and save, I get the alert:
'jnc_roles_users' table
- Unable to modify table.
Cannot insert the value NULL into column 'roleUserID', table 'jnc_roles_users'; column does not allow nulls. INSERT fails.
The statement has been terminated.
This seems counterintuitive because the column is currently stuck at 'Allow Nulls', yet this popup is telling me it can't accept nulls.
lu_roles and lu_users were saved successfully, but not jnc_roles_users, which is the only table that contains 'roleUserID'.
The roleUserID column isn't technically a primary key, but we treat it 'like' one, since it does hold unique values that we set. We need this second, pseudo PK because of how database backups are restored in different locations: the regular ID PK might get duplicated incorrectly across environments.
Any ideas?
It's most likely that some new data with a NULL value was added to the table between the time you changed Allow NULL in the designer and the time you went to change it back.
You can try using ALTER TABLE -
ALTER TABLE jnc_roles_users
ALTER COLUMN roleUserID NVARCHAR(50) NOT NULL
Change NVARCHAR(50) to whatever data type the roleUserID column actually uses. Note that this statement will still fail if any rows currently hold NULL in that column; those rows have to be updated to real values first.
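If NULLs really are what is blocking the change, the cleanup looks roughly like this. A sketch in Python with SQLite (table names borrowed from the question; SQLite can't ALTER a column in place, so the NOT NULL version is rebuilt instead, but the sequence of steps is the same idea):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE jnc_roles_users (ID INTEGER PRIMARY KEY, roleUserID TEXT)")
cur.executemany("INSERT INTO jnc_roles_users VALUES (?, ?)",
                [(1, "ru-1"), (2, None), (3, "ru-3")])

# Step 1: find the offending NULLs that would make the ALTER fail
nulls = cur.execute(
    "SELECT COUNT(*) FROM jnc_roles_users WHERE roleUserID IS NULL").fetchone()[0]
print(nulls)  # 1

# Step 2: backfill them with real values (the 'ru-' || ID value is a placeholder)
cur.execute("UPDATE jnc_roles_users SET roleUserID = 'ru-' || ID "
            "WHERE roleUserID IS NULL")

# Step 3: only now can the NOT NULL constraint be applied without an error
cur.execute("CREATE TABLE jnc_roles_users_fixed "
            "(ID INTEGER PRIMARY KEY, roleUserID TEXT NOT NULL)")
cur.execute("INSERT INTO jnc_roles_users_fixed SELECT * FROM jnc_roles_users")
remaining = cur.execute(
    "SELECT COUNT(*) FROM jnc_roles_users_fixed WHERE roleUserID IS NULL").fetchone()[0]
print(remaining)  # 0
```

In SQL Server the step 3 rebuild would just be the ALTER TABLE ... ALTER COLUMN statement above.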

Is id column position in Postgresql important?

I was testing a migration that deletes a primary key column id (I wanted to use a foreign key as primary key). When I ran and reverted the migration, I saw that the state of my table is the same, except that id column is now last one.
Will it change the behaviour of my database in any way and should I bother to restore the column order in the migration revert code?
In theory everything should be fine, but there are always scenarios when your code could fail.
For example:
a) blind insert:
INSERT INTO tab_name
VALUES (1, 'b', 'c');
A blind insert is when an INSERT query doesn’t specify which columns receive the inserted data.
Why is this a bad thing? Because the database schema may change. Columns may be moved, renamed, added, or deleted. And when they are, one of at least three things can happen:

The query fails. This is the best-case scenario. Someone deleted a column from the target table, and now there aren't enough columns for the insert to go into, or someone changed a data type and the inserted type isn't compatible, or so on. But at least your data isn't getting corrupted, and you may even know the problem exists because of an error message.

The query continues to work, and nothing is wrong. This is a middle-worst-case scenario. Your data isn't corrupt, but the monster is still hiding under the bed.

The query continues to work, but now some data is being inserted somewhere it doesn't belong. Your data is getting corrupted.
b) ORDER BY ordinal (referring to a column by its position):
SELECT *
FROM tab
ORDER BY 1;
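The blind-insert hazard is easy to reproduce. A minimal sketch in Python with SQLite (a hypothetical table; SQLite's loose typing makes the corruption silent here, while a stricter DBMS might land in the "query fails" case instead):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tab (id INTEGER, code TEXT, label TEXT)")
cur.execute("INSERT INTO tab VALUES (1, 'b', 'c')")   # blind insert: fine today

# Simulate a later migration that reorders the columns
cur.execute("DROP TABLE tab")
cur.execute("CREATE TABLE tab (code TEXT, label TEXT, id INTEGER)")
cur.execute("INSERT INTO tab VALUES (1, 'b', 'c')")   # same blind insert still 'works'...

# ...but the id value 1 landed in 'code', and the text 'c' landed in 'id'
bad_id = cur.execute("SELECT id FROM tab").fetchone()[0]
print(bad_id)  # 'c' -- silent corruption, no error raised
```

Naming the columns in the INSERT (and in the ORDER BY) makes the query immune to column reordering.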

Insert & Delete from SQL best practice

I have a database with 2 tables: CurrentTickets & ClosedTickets. When a user creates a ticket via web application, a new row is created. When the user closes a ticket, the row from currenttickets is inserted into ClosedTickets and then deleted from CurrentTickets. If a user reopens a ticket, the same thing happens, only in reverse.
The catch is that one of the columns being copied back to CurrentTickets is the PK column (TicketID), which has IDENTITY set to ON.
I know I can set IDENTITY_INSERT to ON, but as I understand it, this is generally frowned upon. I'm assuming my database is a bit poorly designed. Is there a way to accomplish what I need without using IDENTITY_INSERT? How would I keep the TicketID column auto-incrementing without making it an identity column? I figure I could add another column, RowID, and make that the PK, but I still want the TicketID column to auto-increment if possible while not being considered an identity column.
This just seems like bad design with 2 tables. Why not have a single tickets table that stores all tickets? Then add a column called IsClosed, which is false by default. Once a ticket is closed, you simply update the value to true, and you don't have to do any copying to and from other tables.
All of your code around this part of your application will be much simpler and easier to maintain with a single table for tickets.
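A sketch of that single-table design in Python with SQLite (table and column names are illustrative, and AUTOINCREMENT stands in for SQL Server's IDENTITY):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""
    CREATE TABLE Tickets (
        TicketID INTEGER PRIMARY KEY AUTOINCREMENT,
        Title    TEXT NOT NULL,
        IsClosed INTEGER NOT NULL DEFAULT 0   -- false by default
    )
""")
cur.execute("INSERT INTO Tickets (Title) VALUES ('Printer on fire')")

# Closing (or reopening) a ticket is just an UPDATE: no copying between
# tables, and TicketID never changes, so IDENTITY_INSERT never comes up.
cur.execute("UPDATE Tickets SET IsClosed = 1 WHERE TicketID = 1")
open_count = cur.execute(
    "SELECT COUNT(*) FROM Tickets WHERE IsClosed = 0").fetchone()[0]
print(open_count)  # 0

cur.execute("UPDATE Tickets SET IsClosed = 0 WHERE TicketID = 1")  # reopen
```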
The simple answer is: do not make it an identity column if you want to influence the next Id generated in that column.
Also, I think you have a really poor schema. Rather than having two tables, just add another column to your CurrentTickets table, something like Open BIT; set its value to 1 by default and change it to 0 when the client closes the ticket.
You can turn it on and off as many times as the client changes their mind, without having to go through all the trouble of identity inserts and managing a whole separate table.
Update
Since now you have mentioned its SQL Server 2014, you have access to something called Sequence Object.
You define the object once, and then every time you want a sequential number you just select the next value from it; it is a kind of hybrid between an identity column and a plain INT column.
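SQL Server's actual syntax is CREATE SEQUENCE and NEXT VALUE FOR; the behaviour can be approximated with a one-row counter table, which this sketch uses purely as a portable stand-in (Python with SQLite):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# One-row counter table standing in for a SEQUENCE object
cur.execute("CREATE TABLE ticket_seq (next_val INTEGER NOT NULL)")
cur.execute("INSERT INTO ticket_seq VALUES (1)")

def next_ticket_id(cur):
    """Roughly what NEXT VALUE FOR does: hand out the value, then bump it."""
    val = cur.execute("SELECT next_val FROM ticket_seq").fetchone()[0]
    cur.execute("UPDATE ticket_seq SET next_val = next_val + 1")
    return val

ids = [next_ticket_id(cur) for _ in range(3)]
print(ids)  # [1, 2, 3]
```

In a real multi-user system the read-and-bump would have to be atomic, which is exactly what a native SEQUENCE object gives you for free.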
To achieve this in recent versions of SQL Server, use the OUTPUT clause (definition on MSDN).
OUTPUT clause used with a table variable:
DECLARE @MyTableVar TABLE (...);

DELETE FROM dbo.CurrentTickets
OUTPUT DELETED.* INTO @MyTableVar
WHERE <...>;

INSERT INTO ClosedTickets
SELECT * FROM @MyTableVar;
The second table should have an ID column, but without the IDENTITY property; uniqueness of the values is already enforced by the table they are copied from.

rebuild/refresh my table's PK list - gap in numbers

I have finished all my changes to a database table in sql server management studio 2012, but now I have a large gap between some values due to editing. Is there a way to keep my data, but re-assign all the ID's from 1 up to my last value?
I would like this cleaned up as I populate dropdownlists with these values and then I make interactions with my database with the assumption that my dropdownlist index and the table's ID match up, which is not the case right now.
My current DB has a large gap from 7 to 28. I would like to shift everything from 28 and up back down to 8, 9, 10, 11, etc., so that my database has NO gaps from 1 onward.
If the solution is tricky please give me some steps as I am new to SQL.
Thank you!
Yes, there are any number of ways to "close the gaps" in an auto generated sequence. You say you're new to SQL so I'll assume you're also new to relational concepts. Here is my advice to you: don't do it.
The ID field is a surrogate key. There are several aspects of surrogates one must be mindful of when using them, but the one I want to impress upon you is,
-- A surrogate key is used to make the row unique. Other than the guarantee that
-- the value is unique, no other assumptions may be made concerning the value.
-- In particular, no meaning may be derived from the value as to the contents of
-- the row or the row's relationship to any other row.
You have designed your app with a built-in assumption of the value of the key field (that they will be consecutive). Already it is causing you problems. Do you really want to go through this every time you make changes to the table? And suppose a future feature requires you to filter out some of the choices according to an option the user has selected? Or enable the user to specify the order of the items? Not going to be easy. So what is the solution?
You can create an additional (non-visible) field in the dropdown list that contains the key value. When the user makes a selection, use that index to get the key value of the selection and then go out to the database and get whatever additional data you need. This will work if you populate the list from the entire table or just select a few according to some as yet unknown filtering criteria or change the order in any way.
Voilà. You never have this problem again, no matter how often you add and remove rows in the table.
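The key-carrying-dropdown idea boils down to a couple of lines. A sketch in Python (the list stands in for whatever your UI framework's dropdown actually binds to):

```python
# Each dropdown entry carries (key, display_text); the key is the surrogate
# PK from the database and is never shown to the user.
options = [(3, "Red"), (7, "Green"), (28, "Blue")]   # gaps in the IDs don't matter

selected_index = 2                 # whatever position the user picked in the UI
selected_key = options[selected_index][0]
print(selected_key)  # 28 -- query the database with this, never with the index
```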
However, on the off chance that you are as stubborn as me (not likely!) or just refuse to listen to the melodious voice of reason and experience, then try this:
Create a new table exactly like the old table, including auto incrementing PK.
Populate the new table using a Select from the old table. You can specify any order you want.
Drop the old table.
Rename the new table to the old table name.
You will have to drop and redefine any FKs from other tables. But this entire process can be placed in a script, because if you do this once, you'll probably do it again.
Now all the values are consecutive. Until you edit the table again...
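Those four steps, sketched in Python with SQLite (AUTOINCREMENT standing in for SQL Server's IDENTITY; the FK drop-and-redefine is omitted):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE old_t (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")
cur.executemany("INSERT INTO old_t (id, name) VALUES (?, ?)",
                [(1, "a"), (7, "b"), (28, "c")])   # gappy ids, as in the question

# Steps 1-2: new table with a fresh auto-incrementing PK, populated from the old one
cur.execute("CREATE TABLE new_t (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")
cur.execute("INSERT INTO new_t (name) SELECT name FROM old_t ORDER BY id")

# Steps 3-4: drop the old table and take over its name
cur.execute("DROP TABLE old_t")
cur.execute("ALTER TABLE new_t RENAME TO old_t")

ids = [r[0] for r in cur.execute("SELECT id FROM old_t ORDER BY id")]
print(ids)  # [1, 2, 3] -- gaps closed
```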
You should refactor the code for your dropdown list and not the PK of the table.
If you do not agree, you can do one of the following:
Insert another column holding the dropdown's "order of appearance", make a unique index on it and fill this by hand (or programmatically).
Replacing the SERIAL with a plain INT would also work: make a unique index on the column and fill it by hand (or programmatically).
Remove the large ids and reseed your serial - the exact code depends on your DBMS.
This happens to me all the time. If you don't have any foreign key constraints then it should be an easy fix.
Remember a DELETE statement will remove the record but keep the identity seed the same. (If I remove id # 5 and #5 was the last record inserted then SQL server still stores the identity seed value at "6").
TRUNCATING the table will reset the identity seed back to its original value.
SET IDENTITY_INSERT [TABLE] ON can also be used to insert the correct data in the correct order if truncating cannot happen.
SELECT *
INTO #tempTable
FROM [TableTryingToFix]
TRUNCATE TABLE [TableTryingToFix];
INSERT INTO [TableTryingToFix] (COL1, COL2, COL3, ETC)
SELECT COL1, COL2, COL3, ETC
FROM #tempTable
ORDER BY oldTableID

SQL Server : Attempting to Insert a Duplicate Record Costs an Id

I have the following table set up:
Id int pk, unique not null
Name varchar(50) not null
Other columns not relevant to this issue
With an index set up on Name to be unique and non-clustered.
The setup does EXACTLY what I want - that is, only insert new rows whose Name doesn't already exist in the table, and throw an error if the new row is a duplicate Name.
I might be nit-picky about it, but every attempt to add a duplicate will cause SQL Server to skip the next Id that would have been assigned, had the new row been a non-duplicate Name.
Is there a way to prevent this with some setting, without the need to query for existence first before deciding to insert or deny?
No, there is no setting to prevent the identity value from incrementing on a failed insert.
Like you suggest, you can mitigate this by checking for a duplicate before performing the insert. I would do this not just to keep the identity from incrementing, but also to keep your SQL Server from raising errors as part of normal operation.
However, there may be other exceptional circumstances that would cause an insert to fail... so if gaps in the Ids pose more than an aesthetic problem, an identity column might not be the best solution for what you're trying to solve.
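The check-before-insert can be folded into a single statement. A sketch in Python with SQLite (the gap-on-failed-insert behaviour itself is SQL Server specific, so this only demonstrates the guard, which never raises an error and therefore never burns an id):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (Id INTEGER PRIMARY KEY AUTOINCREMENT, "
            "Name TEXT NOT NULL UNIQUE)")

def insert_if_new(cur, name):
    # Insert only when the name is not already present; a duplicate attempt
    # becomes a no-op instead of an error, so no identity value is wasted.
    cur.execute("INSERT INTO t (Name) SELECT ? "
                "WHERE NOT EXISTS (SELECT 1 FROM t WHERE Name = ?)",
                (name, name))

insert_if_new(cur, "alpha")
insert_if_new(cur, "alpha")      # duplicate: silently skipped
insert_if_new(cur, "beta")

rows = cur.execute("SELECT Id, Name FROM t ORDER BY Id").fetchall()
print(rows)  # [(1, 'alpha'), (2, 'beta')] -- no gap despite the duplicate attempt
```

Under concurrency the unique index is still the real safety net; the guard just keeps the common case error-free.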