Alternative SQL Query for a Badly-Designed Oracle Table - sql

I am pulling data from a legacy table (that I did not design) to convert that data for use in a different application. Here is the truncated table design:
-- Create table
create table order
(
  insert_timestamp TIMESTAMP(6) default systimestamp not null,
  numeric_identity NUMBER not null,
  my_data VARCHAR2(100) not null
);
-- Create/Recreate primary, unique and foreign key constraints
alter table order
  add constraint order_pk primary key (numeric_identity, insert_timestamp);
The idea behind this original structure was that the numeric_identity identified a particular customer. The most current order would be the one with the newest insert timestamp value for the given customer's numeric identity. In this particular case, there are no instances where more than one row has the same insert_timestamp value and numeric_identity value.
I'm tasked with retrieving this legacy data for conversion. I wrote the following query to pull back the latest, unique records, as older records need not be converted:
select * from order t
where t.insert_timestamp =
      (select max(w.insert_timestamp) from order w
       where t.numeric_identity = w.numeric_identity);
This pulls back the expected dataset, but could fail if there somehow were more than one row with the same insert_timestamp and numeric_identity. Is there a better query than what I've written to pull back unique records in a table designed in this fashion?

Another way to write this query:
select *
from (select t.*,
             row_number() over (partition by numeric_identity
                                order by insert_timestamp desc) as rn
      from order t)
where rn = 1;
Also, you can't get a situation where more than one row has the same insert_timestamp and numeric_identity, because the primary key on those two columns guarantees the combination is unique.
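If you ever did have to cope with ties on insert_timestamp (or simply prefer an aggregate form), another option in Oracle is MAX(...) KEEP (DENSE_RANK LAST). This is only a sketch, keeping the question's table name as written; it collapses each customer to a single row even if two rows shared the same timestamp:
select numeric_identity,
       max(insert_timestamp) as insert_timestamp,
       max(my_data) keep (dense_rank last order by insert_timestamp) as my_data
from order
group by numeric_identity;
The KEEP clause restricts the MAX(my_data) aggregate to the rows ranked last by insert_timestamp within each group, so a tie resolves to one value instead of producing duplicate rows.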

Related

How to alter PostgreSQL column with entries to be a nextval id

I have a problem with a really big database with the following schema:
id | date | other columns...
The id column is of type integer. It would be ideal if it were an integer column whose values come from nextval. Many of the rows already have unique ids that were incremented as they were added.
The problem is that all rows added since a specific date have no id; the value is null.
Is it possible to add such a constraint to a table with existing values (plus null values) so that the null values are filled with integer ids?
And is this possible without losing the old ids, ideally with the new ids ascending in relation to the date column?
thanks and greetings
You need to first update the existing rows with a unique, non-null value:
update the_table
   set id = t.new_id
from (
  select ctid,
         (select max(id) from the_table) + row_number() over (order by date) as new_id
  from the_table
  where id is null
) t
where t.ctid = the_table.ctid;
I am not sure the order of the IDs is guaranteed with this approach, but it most likely is.
Now that the column doesn't contain any NULL values, we can change it to automatically assign new values.
The next steps depend on whether you want to make this an identity column or simply a column with a default taken from a sequence (essentially a (discouraged) serial column).
Staying with a "serial"
We need to create a sequence and sync it with the highest value in the column.
create sequence the_table_id_seq;
select setval('the_table_id_seq', max(id))
from the_table;
Then use this for the default and link the sequence to the column.
alter table the_table
  alter id set not null,
  alter id set default nextval('the_table_id_seq');

alter sequence the_table_id_seq owned by the_table.id;
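As a quick sanity check (only a sketch: it assumes the hypothetical the_table from the question, and that its remaining columns are nullable or have defaults), a row inserted without an id should now pick up the next sequence value:
insert into the_table (date) values (current_date);

-- valid here because the insert's default already called nextval for this session
select currval('the_table_id_seq');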
Using an identity column (recommended)
To make this a proper identity column (Postgres 10 and later), you can do it like this:
alter table the_table
  alter id set not null,
  alter id add generated always as identity;
Adding the identity attribute created a new sequence, which we now need to sync with the existing values in the column:
select setval(pg_get_serial_sequence('the_table', 'id'), max(id))
from the_table;
Alternatively, you could have looked up the current max value manually and provided it directly when specifying the identity default:
alter table the_table
  alter id set not null,
  alter id add generated always as identity (start with 42);
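One behavioural note worth a quick test (again just a sketch against the hypothetical the_table; the value 999999 is arbitrary): with GENERATED ALWAYS, Postgres rejects explicit id values unless you override the identity on purpose:
-- rejected: id is GENERATED ALWAYS, so an explicit value is not allowed by default
insert into the_table (id, date) values (999999, current_date);

-- accepted: explicitly override the identity, e.g. when re-loading old rows
insert into the_table (id, date) overriding system value values (999999, current_date);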

Combine the Data of two SQL databases to one

I have two SQL databases with the same schema. Both contain different data but use the same primary keys. I want to add the data from one database to the second one, but every solution I have found just updates the rows with matching primary keys instead of appending the new rows at the end.
Does anyone have a solution?
If your primary key column is an ID field that uses an auto-incrementing sequence (if not, then how would you choose a different primary key for overlapping records anyway?), you should just be able to insert the records from the first schema, excluding the ID in your select query.
For example:
Table Schema:
ID INT(10)
Name VARCHAR(20)
Desc TEXT
SQL
INSERT INTO schema2.table (Name, Desc)
SELECT Name, Desc
FROM schema1.table
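If you do need to carry the old keys over (for example because other tables still reference them), one option is to offset the incoming IDs past the target table's current maximum instead of dropping them. This is only a sketch reusing the hypothetical table above, and depending on your database you may also need to allow explicit inserts into an auto-increment column first (e.g. SET IDENTITY_INSERT ... ON in SQL Server):
INSERT INTO schema2.table (ID, Name, Desc)
SELECT t1.ID + (SELECT MAX(ID) FROM schema2.table), t1.Name, t1.Desc
FROM schema1.table t1
The offset keeps the old key recoverable (old ID = new ID minus the offset) while avoiding collisions with the rows already in schema2.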

Serial numbers per group of rows for compound key

I am trying to maintain an address history table:
CREATE TABLE address_history (
person_id int,
sequence int,
timestamp datetime default current_timestamp,
address text,
original_address text,
previous_address text,
PRIMARY KEY(person_id, sequence),
FOREIGN KEY(person_id) REFERENCES people(id)
);
I'm wondering if there's an easy way to autonumber/constrain sequence in address_history to automatically count up from 1 for each person_id.
In other words, the first row with person_id = 1 would get sequence = 1; the second row with person_id = 1 would get sequence = 2. The first row with person_id = 2, would get sequence = 1 again. Etc.
Also, is there a better / built-in way to maintain a history like this?
Don't. It has been tried many times and it's a pain.
Use a plain serial or IDENTITY column:
Auto increment table column
CREATE TABLE address_history (
address_history_id serial PRIMARY KEY
, person_id int NOT NULL REFERENCES people(id)
, created_at timestamp NOT NULL DEFAULT current_timestamp
, previous_address text
);
Use the window function row_number() to get serial numbers without gaps per person_id. You could persist a VIEW that you can use as a drop-in replacement for your table in queries, to have those numbers ready:
CREATE VIEW address_history_nr AS
SELECT *, row_number() OVER (PARTITION BY person_id
ORDER BY address_history_id) AS adr_nr
FROM address_history;
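A quick usage sketch (it assumes a hypothetical people row with id = 1 already exists, plus a couple of history rows for it):
INSERT INTO address_history (person_id, previous_address)
VALUES (1, '1 Old Street'), (1, '2 Older Lane');

SELECT person_id, adr_nr, previous_address
FROM   address_history_nr
WHERE  person_id = 1
ORDER  BY adr_nr;
adr_nr comes back as 1, 2, ... per person without any gap-less counter being stored in the table itself.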
See:
Gap-less sequence where multiple transactions with multiple tables are involved
Or you might want to ORDER BY something else. Maybe created_at? Better created_at, address_history_id to break possible ties. Related answer:
Column with alternate serials
Also, the data type you are looking for is timestamp or timestamptz, not datetime in Postgres:
Ignoring time zones altogether in Rails and PostgreSQL
And you only need to store previous_address (or more details), not address, nor original_address. Both would be redundant in a sane data model.

SQL Server: how to add new identity column and populate column with ids?

I have a table with a huge amount of data. I'd like to add an extra column id and use it as a primary key. What is the best way to fill this column with values from 1 to the row count?
Currently I'm using a cursor and updating the rows one by one. It takes hours. Is there a way to do that quicker?
Thank you
Just do it like this:
ALTER TABLE dbo.YourTable
ADD ID INT IDENTITY(1,1)
and the column will be created and automatically populated with integer values (as Aaron Bertrand points out in his comment, you don't have any control over which row gets which value - SQL Server handles that on its own and you cannot influence it - but all rows will get a valid, unique int value; there won't be any NULL or duplicate values).
Next, set it as primary key:
ALTER TABLE dbo.YourTable
ADD CONSTRAINT PK_YourTable PRIMARY KEY(ID)
If you want the row numbers assigned in a specific order, you can instead SELECT with ROW_NUMBER() into a new table and then drop the original one. However, depending on table size and other business constraints, you might not want to do that. This also implies that there is some logic by which you want the table ordered.
SELECT ROW_NUMBER() OVER (ORDER BY COL1, COL2, COL3 /* etc. */) AS ID, *
INTO NEW_TABLE
FROM ORIGINAL_TABLE
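If you take that route, the remaining steps would look roughly like this. This is only a sketch using the placeholder names from the example above; you would still need to recreate any indexes, constraints and permissions on the new table:
-- the ID produced by ROW_NUMBER() can come out as a nullable bigint in the new table,
-- so make it NOT NULL before declaring the primary key
ALTER TABLE NEW_TABLE ALTER COLUMN ID BIGINT NOT NULL;

ALTER TABLE NEW_TABLE
ADD CONSTRAINT PK_NEW_TABLE PRIMARY KEY (ID);

DROP TABLE ORIGINAL_TABLE;
EXEC sp_rename 'NEW_TABLE', 'ORIGINAL_TABLE';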

Combining two tables in sqlite3

I have two tables in two separate sqlite3 databases. The datatypes are identical, but the schemas are slightly different. I want them to be a single table in a single database with the same schema as Table 2.
Table 1
CREATE TABLE temp_entries (
id INTEGER PRIMARY KEY,
sensor NUMERIC,
temp NUMERIC,
date NUMERIC);
Table 2
CREATE TABLE "restInterface_temp_entry" (
"id" integer NOT NULL PRIMARY KEY,
"dateTime" integer NOT NULL,
"sensor" integer NOT NULL,
"temp" integer NOT NULL
);
id is not unique between the two tables. I would like to create another table with the same schema as Table 2. I would like the id for the entries from Table 1 to start from 0 and the entries from Table 2 to start after the last entry from Table 1.
Ideally I would like to just add the entries from Table 1 to Table 2 and "reindex" the primary key so that it is in the same ascending order as "dateTime".
UPDATE: I now have both tables using the same schema. I did this by creating a new table with the same schema as Table 2 in the database that held Table 1. I then copied the data to the new table with something like:
INSERT INTO restInterface_temp_entry (id, dateTime, sensor, temp)
SELECT id, date, sensor, temp FROM temp_entries;
Background
I used to record a bunch of temp_entries to a csv file. I wanted to put the data into a format that was easier to work with and chose sqlite3. I wrote a program that pulled all of the entries out and put them into Table 1. I wasn't sure what I was doing at the time, and used Table 2 for all new entries. Now I want to combine them all, hopefully keeping id and date in ascending order.
Figured it out.
Open the current database.
Attach the original database:
ATTACH '/orig/db/location' AS orig;
Move the records from the current database to the original database, leaving out the PK:
insert into orig.restInterface_temp_entry(dateTime,sensor,temp)
select dateTime,sensor,temp from main.restInterface_temp_entry;
Clear the current database's table:
delete from main.restInterface_temp_entry where id > 0;
Copy the updated records from the original database's table back to the current one:
insert into main.restInterface_temp_entry(id,dateTime,sensor,temp)
select id,dateTime,sensor,temp
from orig.restInterface_temp_entry;
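If you also want the ids renumbered so they follow ascending dateTime (the "reindex" part of the question), one way is to rebuild the table and let SQLite hand out fresh rowids in insertion order. A sketch only, assuming nothing else references these ids:
CREATE TABLE restInterface_temp_entry_new (
    "id" integer NOT NULL PRIMARY KEY,   -- alias for the rowid, assigned 1, 2, 3, ... as rows arrive
    "dateTime" integer NOT NULL,
    "sensor" integer NOT NULL,
    "temp" integer NOT NULL
);

-- omit id so SQLite assigns fresh rowids in insertion (i.e. dateTime) order
INSERT INTO restInterface_temp_entry_new (dateTime, sensor, temp)
SELECT dateTime, sensor, temp
FROM restInterface_temp_entry
ORDER BY dateTime;

DROP TABLE restInterface_temp_entry;
ALTER TABLE restInterface_temp_entry_new RENAME TO restInterface_temp_entry;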
I'm assuming SQLite supports INSERT INTO ... SELECT.
INSERT INTO newtable (id, datetime, sensor, temp)
SELECT id, date, sensor, temp
FROM temp_entries
ORDER BY id;
-- id is left out here: the ids overlap between the two tables, so let SQLite
-- assign new ones continuing after the rows copied from temp_entries
INSERT INTO newtable (datetime, sensor, temp)
SELECT "dateTime", "sensor", "temp"
FROM "restInterface_temp_entry"
ORDER BY "id";
This should do the trick.