ALTER TABLE migration does not update the generated code - Kotlin

I'm writing a migration file which alters a table by adding a new column, but the generated code isn't updated, so I can't insert new records into the table with a value for the new column.
Example:
// BankAccount.sq file
CREATE TABLE bank_account (
id INTEGER PRIMARY KEY,
bank_code TEXT NOT NULL,
account_number INTEGER NOT NULL,
account_digit INTEGER NOT NULL
);
selectALL:
SELECT * FROM bank_account;
insert:
INSERT INTO bank_account VALUES (?,?,?,?);
// 1.sqm file
ALTER TABLE bank_account ADD COLUMN bank_name TEXT;
It seems the generated code is not updated after that column is added. For instance, the insert function isn't updated to accept the newly added column, and the internally generated Kotlin code doesn't include the new column.
Is there any way to bypass this issue?

The CREATE TABLE is always the fresh version of your schema, so you need to add the column in the CREATE TABLE as well. If you want migration files to be the source of truth, you need to enable deriveSchemaFromMigrations as outlined here: https://cashapp.github.io/sqldelight/jvm_mysql/#migration-schema
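For example, with the migration in place, the .sq file would also carry the new column (a minimal sketch based on the schema above; bank_name is left nullable so existing rows stay valid, and the full-row insert now takes five values):
// BankAccount.sq file, updated alongside 1.sqm
CREATE TABLE bank_account (
id INTEGER PRIMARY KEY,
bank_code TEXT NOT NULL,
account_number INTEGER NOT NULL,
account_digit INTEGER NOT NULL,
bank_name TEXT
);
insert:
INSERT INTO bank_account VALUES (?,?,?,?,?);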

Related

PostgreSQL option to either insert a new row despite a "conflict" or update the existing row

If I have a PostgreSQL table like:
CREATE TABLE IF NOT EXISTS public.time_series
(id serial PRIMARY KEY,
variable_id integer NOT NULL,
datetime timestamp with time zone NOT NULL,
value real NOT NULL,
edit_datetime timestamp with time zone NOT NULL DEFAULT now())
I don't have any constraints here except the primary key, for now.
And I want the option to either UPDATE or INSERT new data where it is essentially the same variable for the same datetime, so that I can choose to either overwrite or just have a new version. How do I proceed with that?
Without constraints, if I want to INSERT some data I use WHERE NOT EXISTS to make sure the new data doesn't match a row with the same variable_id, datetime, and value. And it works just fine.
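A minimal sketch of that conditional INSERT, assuming the table above (the literal values are just illustrative):
INSERT INTO time_series (variable_id, datetime, value)
SELECT 42, '2024-01-01 00:00:00+00', 1.5
WHERE NOT EXISTS (
    SELECT 1 FROM time_series
    WHERE variable_id = 42
      AND datetime = '2024-01-01 00:00:00+00'
      AND value = 1.5
);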
Now, in the case where I want to UPDATE if it exists and otherwise INSERT, I add a CONSTRAINT like:
ALTER TABLE public.time_series
ADD CONSTRAINT unique_comp UNIQUE (variable_id, datetime)
and update the row if there is a conflict, or INSERT if there is none.
And it works just fine.
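The upsert described here would be along these lines (a sketch assuming the unique_comp constraint above; values are illustrative):
INSERT INTO time_series (variable_id, datetime, value)
VALUES (42, '2024-01-01 00:00:00+00', 1.5)
ON CONFLICT (variable_id, datetime)
DO UPDATE SET value = EXCLUDED.value, edit_datetime = now();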
But, again, what if for some variables I want previous "versions" that I can see based on edit_datetime, while other variables should be overwritten with the new data? Currently one approach rules out the other.
BR

Postgres GENERATED AS IDENTITY column nullability

I want to create a table, which contains a nullable column having GENERATED BY DEFAULT AS IDENTITY option, therefore I run the following query:
CREATE TABLE my_table (
generated INTEGER NULL GENERATED BY DEFAULT AS IDENTITY,
data TEXT NOT NULL
);
But once I try to insert a row into the table whose generated field is NULL, like this:
INSERT INTO my_table(generated, data) VALUES(NULL, 'some data');
I get a null-constraint violation error.
However, if I change the order of the my_table.generated column properties:
CREATE TABLE my_table (
generated INTEGER GENERATED BY DEFAULT AS IDENTITY NULL,
data TEXT NOT NULL
);
It inserts rows whose generated field is NULL without any issues.
Is this the expected behavior for the case?
Postgres developers told me this is a bug since identity columns weren't supposed to be nullable (see the patch file under the response).
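In other words, the column isn't meant to be declared nullable at all. To let the identity generate the value, omit the column or pass DEFAULT instead of NULL (a sketch using the table above):
-- let the identity generate the value instead of inserting NULL
INSERT INTO my_table (data) VALUES ('some data');
-- or, equivalently
INSERT INTO my_table (generated, data) VALUES (DEFAULT, 'some data');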

Delete and Copy Big Table with Autoincrement

I want to delete many rows (more than a million) from a big table.
My table is like this:
Create table MY_TABLE (
MY_ID NUMBER GENERATED BY DEFAULT AS IDENTITY (Start with 1) primary key,
PROCESS NUMBER,
INFORMATION VARCHAR2(100)
);
Instead of using "delete from MY_TABLE where PROCESS = 3"
I do:
CREATE TABLE BCK_MY_TABLE AS (SELECT * FROM MY_TABLE WHERE PROCESS <> 3);
DROP TABLE MY_TABLE;
RENAME BCK_MY_TABLE to MY_TABLE;
The problem is: when I create another table (BCK_MY_TABLE) I lose the autoincrement on the column MY_ID. What can I do?
There isn't a straightforward way to do this with 'create table as select' (CTAS), because my_id in the new table won't be an identity column, and you can't make existing columns into identity columns.
One way would be to create the table explicitly with an identity column, copy the data and reset the identity value:
create table bck_my_table
( my_id number generated by default as identity primary key
, process number
, information varchar2(100) );
insert into bck_my_table (my_id, process, information)
select my_id, process, information from my_table;
alter table bck_my_table
modify my_id generated always as identity (start with limit value);
(We have to use generated by default so that the existing my_id values can be inserted explicitly, then change it to generated always to prevent further changes.)
Another way would be to copy the table using CTAS then add a new identity column, update it from the old my_id, reset it using start with limit value, drop the old column and rename the new one.
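A sketch of that second approach (untested; as above, the new column must start as generated by default so the old my_id values can be copied in):
create table bck_my_table as
    select * from my_table where process <> 3;
alter table bck_my_table
    add new_id number generated by default as identity;
update bck_my_table set new_id = my_id;
alter table bck_my_table
    modify new_id generated always as identity (start with limit value);
alter table bck_my_table drop column my_id;
alter table bck_my_table rename column new_id to my_id;
alter table bck_my_table add primary key (my_id);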

Restoring a Truncated Table from a Backup

I am restoring the data of a truncated table in an Oracle Database from an exported CSV file. However, I find that the primary key column auto-increments instead of taking the actual values from the backed-up file.
I intend to do the following:
1. drop the primary key
2. import the table data
3. add primary key constraints on the required column
Is this a good approach? If not, what is recommended? Thanks.
EDIT: After more investigation, I observed there's a trigger that generates nextval on a sequence to be inserted into the primary key column. This is the source of the predicament. Hence, following the procedure above would not solve the problem; it lies in the trigger (and/or sequence) on the table. This is solved!
It's easier to use your .csv as an external table and then:
1. create table your_table_temp as select * from the external table
2. examine the data in the new temp table to ensure you know what range of primary keys is present
3. do a MERGE into the target table
Samples of the external table and the MERGE:
CREATE TABLE countries_ext (
country_code VARCHAR2(5),
country_name VARCHAR2(50),
country_language VARCHAR2(50)
)
ORGANIZATION EXTERNAL (
TYPE ORACLE_LOADER
DEFAULT DIRECTORY ext_tab_data
ACCESS PARAMETERS (
RECORDS DELIMITED BY NEWLINE
FIELDS TERMINATED BY ','
MISSING FIELD VALUES ARE NULL
(
country_code CHAR(5),
country_name CHAR(50),
country_language CHAR(50)
)
)
LOCATION ('Countries1.txt','Countries2.txt')
)
PARALLEL 5
REJECT LIMIT UNLIMITED;
and the MERGE:
MERGE INTO employees e
USING hr_records h
ON (e.id = h.emp_id)
WHEN MATCHED THEN
UPDATE SET e.address = h.address
WHEN NOT MATCHED THEN
INSERT (id, address)
VALUES (h.emp_id, h.address);
Edit: after you have merged the data you can drop the temp table, and the result is your previous table with the old data and the new data together.
Edit: you mention "During imports, the primary key column does not insert from the file, but auto-increments". This can only happen when there is a trigger on the table, likely a BEFORE INSERT ... FOR EACH ROW trigger. Disable the trigger, then do your import, and re-enable the trigger after committing your inserts.
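For example (the trigger name is a placeholder for whatever trigger exists on your table):
ALTER TRIGGER my_table_bi_trigger DISABLE;
-- run the import, then COMMIT
ALTER TRIGGER my_table_bi_trigger ENABLE;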
I used the following procedure to solve it:
1. drop trigger trigger_name
2. Import the table data into the target table
3. drop sequence sequence_name
4. Recreate the sequence so it starts after the highest imported key:
CREATE SEQUENCE SEQ_NAME INCREMENT BY 1 START WITH start_index_for_next_val MAXVALUE max_val MINVALUE 1 NOCYCLE CACHE 20 NOORDER;
5. Recreate the trigger:
CREATE OR REPLACE TRIGGER "schema_name"."trigger_name"
before insert on target_table
for each row
begin
select seq_name.nextval
into :new.unique_column_name
from dual;
end;

Derby DB modify 'GENERATED' expression on column

I'm attempting to alter a Derby database that has a table like this:
CREATE TABLE sec_merch_categories (
category_id int NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1, INCREMENT BY 1),
packageName VARCHAR(100) NOT NULL,
name VARCHAR(100) NOT NULL,
primary key(category_id)
);
I'd like to change the category_id column to be:
category_id int NOT NULL GENERATED BY DEFAULT AS IDENTITY (START WITH 1, INCREMENT BY 1),
The only place I've seen that documents this is IBM's DB2, which advises dropping the expression, changing the integrity of the table, and adding the expression back. Is there any way of doing this in Derby?
Thanks,
Andrew
You can create a new table with the schema you want (but a different name), then use INSERT INTO ... SELECT FROM ... to copy the data from the old table to the new table (or unload the old table and reload it into the new table using the copy-data system procedures), then use RENAME TABLE to rename the old table to an alternate name and rename the new table to its desired name.
And, as @a_horse_with_no_name indicated in a comment above, all of these steps are documented in the Derby documentation on the Apache website.
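A sketch of those steps for the table above (untested; GENERATED BY DEFAULT is what allows the existing category_id values to be re-inserted explicitly):
CREATE TABLE sec_merch_categories_new (
    category_id INT NOT NULL GENERATED BY DEFAULT AS IDENTITY (START WITH 1, INCREMENT BY 1),
    packageName VARCHAR(100) NOT NULL,
    name VARCHAR(100) NOT NULL,
    PRIMARY KEY (category_id)
);
INSERT INTO sec_merch_categories_new (category_id, packageName, name)
    SELECT category_id, packageName, name FROM sec_merch_categories;
RENAME TABLE sec_merch_categories TO sec_merch_categories_old;
RENAME TABLE sec_merch_categories_new TO sec_merch_categories;
-- restart the identity above the highest copied value (pick a number that fits your data)
ALTER TABLE sec_merch_categories ALTER COLUMN category_id RESTART WITH 1000;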