Stop saving data to a primary key column - SQL

I have a table with two columns: a regular id (an integer, I mean) and a uuid. In practice I'm using the uuid for everything, but there is one problem: the data in my DB comes from outside, INCLUDING the original integer ids.
My question is: how can I stop saving to the integer id column?
It's not possible to make a primary key accept blanks(?).
It's not possible to make a primary key accept repeated values(?).
I can't just delete the primary key column(?)
So, bottom line: I need to allow my table to save rows without the original integer primary key. Is that possible somehow?
(I'm using Postgres.)
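For context, this is roughly the change I have in mind, sketched with hypothetical table/constraint names and assuming the uuid column is already populated and unique:
ALTER TABLE my_table DROP CONSTRAINT my_table_pkey;  -- drop the integer key constraint
ALTER TABLE my_table ADD CONSTRAINT my_table_pkey PRIMARY KEY (uuid);
ALTER TABLE my_table DROP COLUMN id;  -- or just drop its NOT NULL and keep it as a plain column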

Related

How to solve the ORA-01758 problem without deleting data, and add a primary key

I want to write an ALTER TABLE SQL statement to add a column to a table. The column should have the NUMBER datatype, the NOT NULL attribute, and be the primary key.
But it fails with ORA-01758:
ALTER TABLE INSURANCE
ADD (INS_ID NUMBER PRIMARY KEY NOT NULL);
If I add DEFAULT 0, it avoids the error, but then I cannot set up a primary key, and INS_ID shows 0, not (null).
Because this table's data comes from an Excel document, how can I solve this without deleting the data?
If I must delete the data, how can I restore it easily?
Typically you can either:
1. provide a default value, so Oracle can fill the column as it creates it, satisfying the constraint; or
2. create the column as nullable, fill it with relevant data, then enable the NOT NULL restriction and make it the primary key once it has data (see the sketch below); or
3. empty the table.
Option 1 is not an option for you, because the values would have to be unique if they are to be a primary key. You could consider associating the column with a sequence, or making it an identity column, though.
Option 2 is a likely option for you if an auto-generated incrementing number is no good as a PK (for example, the key data is already known or calculated).
Option 3 is something you've already said is not an option.
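A minimal sketch of option 2, using the INSURANCE table from the question; ROWNUM is used here only as a placeholder for whatever your real, unique key data is:
ALTER TABLE INSURANCE ADD (INS_ID NUMBER NULL);        -- add the column as nullable
UPDATE INSURANCE SET INS_ID = ROWNUM;                  -- fill it with unique values
ALTER TABLE INSURANCE MODIFY (INS_ID NUMBER NOT NULL); -- now enforce NOT NULL
ALTER TABLE INSURANCE ADD CONSTRAINT INS_PK PRIMARY KEY (INS_ID);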
Give some thought to the ongoing maintenance requirements: every front-end app that writes data into this table will need to be upgraded to understand it has a primary key, unless you're using a sequence/identity or similar that provides a unique value for the row. If there will be a lot to update, and you don't care to have a PK in a particular form or derived from some existing value/relationship elsewhere, an auto-number PK can be helpful. If this data needs to relate to existing data that has a key, you need to upgrade the front-end apps so they can respect the new PK.

Access Primary Key Basic Understanding

This is a bit of a general question, but I can't seem to find an explicit answer.
Let's say I have an import table with a primary key added by Access, and then another table that I use to clean up, etc., and then connect to Excel.
If I delete a record from my first table (let's say, primary key 123) and then append a bunch of records to it, will primary key 123 be reused, or will Access "know" that that key was used previously and subsequently deleted? Will it fill a record using that key because it's available?
a primary key added by Access
This is always an AutoNumber column.
It will not re-use a previously deleted ID.
Exception: if you delete the last appended record(s) and then compact & repair the database, the AutoNumber "seed" will be reset to (current maximum ID) + 1.
But gaps in the IDs will never be filled.
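As an aside: if you ever need to reset the seed yourself, Access's DDL supports re-seeding an AutoNumber column via the COUNTER type. A hedged sketch with a hypothetical table name (this form of ALTER TABLE typically has to be executed through ADO/OleDb rather than the query designer):
ALTER TABLE ImportTable ALTER COLUMN ID COUNTER(1000, 1)
After this, the next appended record would get ID 1000, incrementing by 1.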

Setting up foreign key with different datatype

If I create two tables and I want to set one column as a foreign key to another table's column, why the hell am I allowed to set the foreign key column's datatype?
It just doesn't make any sense, or am I missing something? Is there any scenario where a foreign key column has a different datatype on purpose?
To go a little deeper into my concerns: I tried to use pgAdmin to build a simple Postgres DB. I made the first table with a serial primary key. Then I tried to make the foreign key, but with what datatype? I have seen somewhere that serial is unsigned bigint, but that option doesn't even exist in pgAdmin. Of course I could use SQL, but then why am I using a GUI? So I tried Navicat instead; same problem. I feel like with every choice I make another mistake in my DB design...
EDIT:
Perhaps I asked the question the wrong way.
I was allowed to build this structure:
CREATE TABLE "user"  -- "user" is a reserved word in Postgres, so it must be quoted
(
  id bigint NOT NULL,
  CONSTRAINT user_pkey PRIMARY KEY (id)
)
WITH (
  OIDS=FALSE
);
CREATE TABLE book
(
  "user" integer,
  CONSTRAINT dependent_user_fkey FOREIGN KEY ("user")
      REFERENCES "user" (id) MATCH SIMPLE
      ON UPDATE NO ACTION ON DELETE NO ACTION
)
WITH (
  OIDS=FALSE
);
I insert some data into table "user":
INSERT INTO "user" (id)
VALUES (5000000000);
But the following insert fails:
INSERT INTO book ("user")
VALUES (5000000000);
with ERROR: integer out of range, which is understandable, but it is an obvious design error.
And my question is: why, when we set the CONSTRAINT, are the data types not validated? If I'm wrong, the answer should contain a scenario where it is useful to have different data types.
Actually, it does make sense. Here is why:
In a table, you can in fact set any column as its primary key, so it could be an integer, a double, a string, etc., even though nowadays we mostly use either integers or, more recently, strings as primary keys.
Since the foreign key points to another table's primary key, that is why you need to specify the foreign key's datatype. And it obviously needs to be the same datatype.
EDIT:
SQL implementations are lax in this case, as we can see: they do allow compatible types (INT and BIGINT, FLOAT or DECIMAL and DOUBLE), but at your own risk, just as we can see in your example below.
However, the SQL norms do specify that both datatypes must be the same:
if the datatype is character, they must have the same length; if it is integer, they must have the same size and must both be signed or both unsigned.
You can see for yourself over here, in a chapter from a MySQL book published in 2003.
Hope this answers your question.
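To illustrate, here is a sketch of the asker's schema with the foreign key column widened to match the referenced bigint key; with this change the failing insert goes through:
CREATE TABLE "user" (
  id bigint NOT NULL,
  CONSTRAINT user_pkey PRIMARY KEY (id)
);
CREATE TABLE book (
  "user" bigint,  -- same type as the column it references
  CONSTRAINT dependent_user_fkey FOREIGN KEY ("user") REFERENCES "user" (id)
);
INSERT INTO "user" (id) VALUES (5000000000);
INSERT INTO book ("user") VALUES (5000000000);  -- no longer out of range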
To answer your question of why you'd ever want a different type for a foreign key vs. a primary key, here is one scenario:
I'm in a situation where an extremely large Postgres table is running out of integer values for its id sequence. Lots of other, equally large tables have a foreign key to that parent table.
We are upsizing the ID from integer to bigint, both in the parent table and all the child tables. This requires a full table rewrite. Due to the size of the tables and our uptime commitments and maintenance window size, we cannot rewrite all these tables in one window. We have about three months before it blows up.
So between maintenance windows, we will have primary keys and foreign keys with the same numeric value, but different size columns. This works just fine in our experience.
Even outside an active migration strategy like this, I could see creating a new child table with a bigint foreign key, with the anticipation that "someday" the parent table will get its primary key upsized from integer to bigint.
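A minimal sketch of that mismatch (hypothetical table names), which Postgres accepts, followed by the eventual upsize:
CREATE TABLE parent (id integer PRIMARY KEY);
CREATE TABLE child (
  parent_id bigint REFERENCES parent (id)  -- wider than the key it references; allowed
);
-- later, in a maintenance window (this forces a full table rewrite):
ALTER TABLE parent ALTER COLUMN id TYPE bigint;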
I don't know if there is any performance penalty with mismatched column sizes. That question is actually what brought me to this page, as I've been unable to find guidance on it online.
(Tangent: Never create any table with an integer id. Go with bigint, no matter what you think your data will look like in ten years. You're welcome.)

How can I replace the existing primary key with a new primary key on my table?

I'm working with a legacy SQL Server database which has a core table with a bad primary key.
The key is of type NVARCHAR(50) and contains an application-generated string based on various things in the table. For obvious reasons, I'd like to replace this key with an auto-incrementing (identity) INT column.
This is a huge database and we're upgrading it piece-by-piece. We want to minimize the changes to tables that other components write to. I figured I could change the table without breaking anything by just:
1. Adding the new Id column to the table and making it nullable
2. Filling it with unique integers and making it NOT NULL
3. Dropping the existing primary key, while ensuring there's still a uniqueness constraint on that column
4. Setting the new Id column to be the new primary key and identity
Item 3 is proving very painful. Because this is a core table, there are a lot of other tables with foreign key constraints on it. To drop the existing primary key, it seems I have to delete all these foreign key constraints and create them again afterwards.
Is there an easier way to do this or will I just have to script everything?
Afraid that is the bad news. We just got through a big project of doing the same type of thing, although our head DBA had a few tricks up his sleeve. You might look at generating your scripts ahead of time for the flipping of the switch.
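For example, a hedged T-SQL sketch that generates the DROP CONSTRAINT statements for every foreign key referencing the core table (the name dbo.base is hypothetical; you would want a matching generator for the re-CREATE side):
SELECT 'ALTER TABLE ' + QUOTENAME(OBJECT_SCHEMA_NAME(fk.parent_object_id))
     + '.' + QUOTENAME(OBJECT_NAME(fk.parent_object_id))
     + ' DROP CONSTRAINT ' + QUOTENAME(fk.name) + ';'
FROM sys.foreign_keys AS fk
WHERE fk.referenced_object_id = OBJECT_ID('dbo.base');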
I once did the same thing, and basically used the process you describe. Except, of course, you first have to visit each other table and add a new foreign key pointing to the new column in your base table.
So the approach I used was:
1. Add a new column with an auto-incrementing integer in the base table, and ensure it has a unique index on it (to be replaced later by the primary key).
2. For each foreign key relationship pointing to the base table, add a new column in the child table. (Note this can result in adding more than one column in the child table, if there is more than one relationship.)
3. For each instance of a key in the child table, enter a value into the new foreign key field(s).
4. Replace your foreign key relationships such that the new column now serves as the key.
5. Make the new column in the base table the primary key.
6. Drop the old primary key in the base table and each old foreign key in the children.
It is doable, and not as hard as it might sound at first. The crux is a series of update statements for the child tables, of this nature:
UPDATE child_table
SET new_column = (SELECT base.new_primary
                  FROM base
                  WHERE base.old_primary = child_table.old_foreign);

Appending Rows into an SQLite Database Where Primary Key May Already Exist

I'm trying to merge a few pairs of SQLite3 databases that have the same tables (and schemas). Some of the tables are pretty simple and just have rows of plain data, but some of the tables have primary keys. Some of the keys are unique values like a URL (e.g. url LONGVARCHAR PRIMARY KEY), and some of them are just simple integer indexes, NOT set to auto-increment (e.g. id INTEGER PRIMARY KEY).
I've found several topics on merging databases (and I had already manually merged one pair of non-primary-key databases without trouble), but I am concerned about the ones with keys that may already exist in both.
My question is: what happens if a row is inserted into a database where a row with the same key already exists? It would overwrite the row that has that key, right? I was hoping that it would append the row to the table and update the key, but that only works if the key has a numeric component that is set to auto-increment, correct?
Can anyone confirm my suppositions, and if possible, offer a suggestion on the easiest way to append such rows?
Thanks a lot.
You should have no problems if you set the primary key in the destination table to auto-increment.
Then, when you do your bulk insert command (or whatever you are using to insert values into your new table), you simply do not supply input for your primary key field, and there will NEVER be a duplicate.
Columns: ID, Name
Just don't provide the ID field, i.e.:
INSERT INTO tableName (Name) VALUES ('Synetech');
The insert would just add this row with the next available ID in the table.
Good luck!
If you try to INSERT a duplicate primary key, it will give you an error and not allow the insert. SQLite also supports the REPLACE INTO syntax, which will update the row on a duplicate primary key.
If you want to append on duplicates, you will have to check whether a row with that key already exists, and if so, change the key to some new value. The correct way to do this likely depends on your application. For integer keys you could just take max + 1, but for the URL keys it's not clear what the correct behavior should be.
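A hedged sketch of the merge options, assuming hypothetical table names and that both files share the same schema:
ATTACH DATABASE 'other.db' AS src;
-- overwrite any row whose primary key already exists:
REPLACE INTO pages SELECT * FROM src.pages;
-- or keep the existing row and silently skip the duplicate:
INSERT OR IGNORE INTO pages SELECT * FROM src.pages;
-- for integer keys, append with fresh ids by omitting the key column
-- (an INTEGER PRIMARY KEY auto-assigns the next rowid when omitted):
INSERT INTO items (name) SELECT name FROM src.items;
DETACH DATABASE src;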