Can these three SQLite INSERTs be combined or improved?

I have three tables:
CREATE TABLE "local" ("id" INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL , "serialNumber" TEXT, "location" TEXT)
CREATE TABLE "setups" ("id" INTEGER PRIMARY KEY NOT NULL ,"hold" TEXT,"mode" INTEGER,"setTemp" REAL,"maxSTemp" REAL,"minSTemp" REAL,"units" TEXT,"heat" INTEGER,"heatMode" INTEGER,"fanMode" INTEGER,"fan" INTEGER,"cool" INTEGER)
CREATE TABLE "data" ("id" INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL ,"humidity" REAL,"time" INTEGER,"filtChng" INTEGER,"indoorTemp" REAL,"outdoorTemp" REAL, "setups_id" INTEGER, "local_id" INTEGER)
Every time I get a new entry I execute:
INSERT INTO local ('serialNumber') SELECT 'XXXX' WHERE NOT EXISTS (SELECT * FROM local WHERE serialNumber='XXXX')
INSERT INTO setups ('hold','mode','setTemp','maxSTemp','minSTemp','units','heat','heatMode','fanMode','fan','cool') SELECT '00',1,74.0,74.0,74.0,'F',1,1,1,1,1 WHERE NOT EXISTS (SELECT * FROM setups WHERE hold='00' AND mode=1 AND setTemp=74.0 AND maxSTemp=74.0 AND minSTemp=74.0 AND units='F' AND heat=1 AND heatMode=1 AND fanMode=1 AND fan=1 AND cool=1)
INSERT INTO data ('humidity','filtChng','time','indoorTemp','outdoorTemp',local_id,setups_id) SELECT 74.0,111111111,100,74.0,74.0,local.id,setups.id FROM local CROSS JOIN setups WHERE local.serialNumber='XXXX' AND setups.hold='00' AND setups.mode=1 AND setups.setTemp=74.0 AND setups.maxSTemp=74.0 AND setups.minSTemp=74.0 AND setups.units='F' AND setups.heat=1 AND setups.heatMode=1 AND setups.fanMode=1 AND setups.fan=1 AND setups.cool=1
What I am doing works, but seems slow and redundant/inefficient...

Well, you can remove the "where not exists" part from the "local" insert if you put a UNIQUE constraint on the "serialNumber" column. Be careful: this will throw a constraint violation instead of just not inserting the row, so be sure to handle that in the application.
And though I assume it is, be sure that checking for duplicates is really necessary in your app.
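For illustration, a minimal sketch of that suggestion (the INSERT OR IGNORE variant at the end is an alternative I'm adding, not something from the original question):
CREATE TABLE "local" (
    "id" INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
    "serialNumber" TEXT UNIQUE,
    "location" TEXT
);
-- The WHERE NOT EXISTS guard is no longer needed; a duplicate serial number now
-- raises a UNIQUE constraint violation that the application has to catch:
INSERT INTO local (serialNumber) VALUES ('XXXX');
-- Or, if silently skipping duplicates is acceptable, SQLite's conflict clause can
-- do the "insert only if new" part without any application-side handling:
INSERT OR IGNORE INTO local (serialNumber) VALUES ('XXXX');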


Update Postgres SQL table with SERIAL from previous insert [duplicate]

Very new to SQL in general. I'm working on creating two tables: one representing appliances, with a primary key, and a second representing, for example, a microwave, with its FK referencing the primary table's PK.
I'm using SERIAL as the id for the primary table, but I don't know how to update or insert into the second table using the specific generated value from the first.
I've created my tables using PSQL (Postgres15) like so:
CREATE TABLE Appliances (
id SERIAL NOT NULL,
field1 integer NOT NULL DEFAULT (0),
--
PRIMARY KEY (id),
UNIQUE(id)
);
CREATE TABLE Microwaves (
id integer NOT NULL,
field1 integer,
--
PRIMARY KEY (id),
FOREIGN KEY (id) REFERENCES Appliances(id)
);
Inserting my first row into the Appliance table:
INSERT INTO Appliances(field1) VALUES(1);
SELECT * FROM Appliances;
Yields a single row with id = 1 and field1 = 1.
And a query I found somewhere returns the current value of the SERIAL sequence:
SELECT currval(pg_get_serial_sequence('Appliances', 'id'));
Yields 1.
I'm struggling to determine how to format the INSERT statement, and have tried several variations on the input below:
INSERT INTO Microwaves VALUES(SELECT currval(pg_get_serial_sequence('Appliances', 'id'), 1));
Yields a syntax error.
Appreciate feedback on solving the problem as represented, or a better way to tackle this in general.
Okay, it looks like I stumbled on at least one solution that works in my case, taken from https://stackoverflow.com/a/50004699/3564760:
DO $$
DECLARE appliance_id integer;
BEGIN
INSERT INTO Appliances(field1) VALUES(2) RETURNING id INTO appliance_id;
INSERT INTO Microwaves(id, field1) VALUES(appliance_id, 100);
END $$;
Still open to other answers if this isn't ideal.
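If a single statement is preferred over a DO block, a data-modifying CTE with RETURNING can chain the two inserts. A rough sketch against the example tables (the literal values are made up):
WITH new_appliance AS (
    INSERT INTO Appliances (field1)
    VALUES (2)
    RETURNING id
)
INSERT INTO Microwaves (id, field1)
SELECT id, 100
FROM new_appliance;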

How to insert data from one table into another as PostgreSQL array?

I have the following tables:
CREATE TABLE "User" (
id integer DEFAULT nextval('"User_id_seq"'::regclass) PRIMARY KEY,
name text NOT NULL DEFAULT ''::text,
coinflips boolean[]
);
CREATE TABLE "User_coinflips_COPY" (
"nodeId" integer,
position integer,
value boolean,
id integer DEFAULT nextval('"User_coinflips_COPY_id_seq"'::regclass) PRIMARY KEY
);
I'm now looking for the SQL statement that grabs the value entry from each row in User_coinflips and inserts it as an array into the coinflips column on User.
Any help would be appreciated!
Update
Not sure if it's important, but I just realized a minor mistake in my table definitions above: I replaced User_coinflips with User_coinflips_COPY, since that accurately describes my schema. Just for context, this is what it looked like before:
CREATE TABLE "User_coinflips" (
"nodeId" integer REFERENCES "User"(id) ON DELETE CASCADE,
position integer,
value boolean NOT NULL,
CONSTRAINT "User_coinflips_pkey" PRIMARY KEY ("nodeId", position)
);
You are looking for an UPDATE rather than an INSERT.
Use a derived table with the aggregated values to join against in the UPDATE statement:
update "User"
set conflips = t.flips
from (
select "nodeId", array_agg(value order by position) as flips
from "User_coinflips"
group by "nodeId"
) t
where t."nodeId" = "User"."nodeId";

How do I check the value of a foreign key on insert?

I'm teaching myself SQL using SQLite3, which is well suited for my forever-game project (don't we all have one?), and I have the following tables:
CREATE TABLE equipment_types (
row_id INTEGER PRIMARY KEY,
type TEXT NOT NULL UNIQUE);
INSERT INTO equipment_types (type) VALUES ('gear'), ('weapon');
CREATE TABLE equipment_names (
row_id INTEGER PRIMARY KEY,
name TEXT NOT NULL UNIQUE);
INSERT INTO equipment_names (name) VALUES ('club'), ('band aids');
CREATE TABLE equipment (
row_id INTEGER PRIMARY KEY,
name INTEGER NOT NULL UNIQUE REFERENCES equipment_names,
type INTEGER NOT NULL REFERENCES equipment_types);
INSERT INTO equipment (name, type) VALUES (1, 2), (2, 1);
So now we have a 'club' that is a 'weapon', and 'band aids' that are 'gear'. I now want to make a weapons table; it will have an equipment_id that references the equipment table and weapon properties like damage and range, etc. I want to constrain it to equipment that is a 'weapon' type.
But for the life of me I can't figure it out. CHECK, apparently, only allows expressions, not subqueries, and I've been trying to craft a TRIGGER that might do the job, but in short, I can't quite figure out the query and syntax, or how to check the result, which as I understand it will be in the form of a table, or NULL.
Also, are there good online resources for learning SQL more advanced than W3School? Add them as a comment, please.
Just write a query that looks up the type belonging to the new record:
CREATE TRIGGER only_weapons
BEFORE INSERT ON weapons
FOR EACH ROW
WHEN (SELECT et.type
FROM equipment_types AS et
JOIN equipment AS e ON e.type = et.row_id
WHERE e.row_id = NEW.equipment_id
) != 'weapon'
BEGIN
SELECT RAISE(FAIL, 'not a weapon');
END;
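The weapons table itself is not shown in the question; a minimal sketch of what the trigger above assumes could look like this (the damage and range columns are placeholders):
CREATE TABLE weapons (
    row_id INTEGER PRIMARY KEY,
    equipment_id INTEGER NOT NULL UNIQUE REFERENCES equipment,
    damage INTEGER,
    "range" INTEGER
);
-- With the sample data above, equipment 1 (the club) is a weapon, so this succeeds:
INSERT INTO weapons (equipment_id, damage, "range") VALUES (1, 6, 1);
-- Equipment 2 (band aids) is gear, so the trigger raises 'not a weapon':
INSERT INTO weapons (equipment_id, damage, "range") VALUES (2, 1, 0);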
The foreign key references should be to the primary key and have the same type. I would phrase this as:
CREATE TABLE equipment_types (
equipment_type_id INTEGER PRIMARY KEY,
type TEXT NOT NULL UNIQUE
);
INSERT INTO equipment_types (type) VALUES ('gear'), ('weapon');
CREATE TABLE equipment_names (
equipment_name_id INTEGER PRIMARY KEY,
name TEXT NOT NULL UNIQUE
);
INSERT INTO equipment_names (name) VALUES ('club'), ('band aids');
CREATE TABLE equipment (
equipment_id INTEGER PRIMARY KEY,
equipment_name_id INTEGER NOT NULL UNIQUE REFERENCES equipment_names(equipment_name_id),
equipment_type_id INTEGER NOT NULL REFERENCES equipment_types(equipment_type_id)
);
I would not use the name row_id for the primary key. That is the built-in default, so the name is not very good. In SQLite, an integer primary key is automatically auto-incremented (see here).
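For example, populating and reading the revised schema back (same sample data as in the question):
INSERT INTO equipment (equipment_name_id, equipment_type_id) VALUES (1, 2), (2, 1);

SELECT e.equipment_id, n.name, t.type
FROM equipment AS e
JOIN equipment_names AS n ON n.equipment_name_id = e.equipment_name_id
JOIN equipment_types AS t ON t.equipment_type_id = e.equipment_type_id;
-- club is a weapon, band aids are gear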

SQLITE3: find IDs across multiple tables

I would like to do an analysis of what codes appear in multiple tables under certain conditions. However, I don't think the database schema suits the task very well, but maybe there's something I don't know about that can help me. Here's a simplified schema:
CREATE TABLE "batchDescription" (
id INTEGER NOT NULL,
name TEXT NOT NULL UNIQUE,
PRIMARY KEY (id)
);
CREATE TABLE "simulationDetails" (
id INTEGER NOT NULL,
ko_index_id INTEGER NOT NULL,
batch_description_id INTEGER NOT NULL,
data1 REAL NOT NULL,
data2 INTEGER NOT NULL,
PRIMARY KEY (id),
FOREIGN KEY(ko_index_id) REFERENCES "koIndex" (id),
FOREIGN KEY(batch_description_id) REFERENCES "batchDescription" (id)
);
CREATE TABLE "koIndex" (
id INTEGER NOT NULL,
number_of_kos INTEGER NOT NULL,
PRIMARY KEY (id)
);
CREATE TABLE "1kos" (
ko_index_id INTEGER NOT NULL,
ko1 INTEGER NOT NULL,
PRIMARY KEY (ko_index_id),
FOREIGN KEY(ko_index_id) REFERENCES "koIndex" (id)
);
CREATE TABLE "2kos" (
ko_index_id INTEGER NOT NULL,
ko1 INTEGER NOT NULL,
ko2 INTEGER NOT NULL,
PRIMARY KEY (ko_index_id),
FOREIGN KEY(ko_index_id) REFERENCES "koIndex" (id)
);
CREATE TABLE "3kos" (
ko_index_id INTEGER NOT NULL,
ko1 INTEGER NOT NULL,
ko2 INTEGER NOT NULL,
ko3 INTEGER NOT NULL,
PRIMARY KEY (ko_index_id),
FOREIGN KEY(ko_index_id) REFERENCES "koIndex" (id)
);
This goes up to table "525kos", which has ko1 to ko525 in it; ko1 to ko525 are IDs that are primary keys in a table not shown here. I want to do an analysis of how often certain IDs are present under certain conditions. Here is a simple example to illustrate:
I would like to count the number of times a certain ID (let's say 127) appears in any koX column of the "13kos" table when simulationDetails.data1 is not equal to 0. I would do this on a database called ko.db from the bash command line like:
for ko_idx in {1..13}; do sqlite3 ko.db "select count(ko${ko_idx}) from '13kos' where ko${ko_idx} = 127 and ko_index_id in (select ko_index_id from simulationDetails where data1 != 0);"; done
Already this is slow and inefficient, but it is simple compared to what I would like to do. What if I wanted to analyse all the IDs in all possible columns in all "Xkos" tables and compare the counts for data1 equal to zero versus not equal to zero?
Can anybody direct me to a better way of doing this, or is the schema design just not suited to this kind of analysis, so I'll have to give up?
EDIT: Thought I'd add a bit of extra detail to avoid confusion. I suspect that a good way to achieve what I want would be to somehow combine all the "Xkos" tables into one temporary table and then search for certain IDs from that table. How would I combine all 525 ko tables without writing out each table name?
How would I combine all 525 ko tables without writing out each table name?
1. Create a table with the same number of columns as the largest table (the table into which you merge), allowing NULLs.
2. Query the sqlite_master table using something like:
SELECT * from sqlite_master WHERE name LIKE '%kos%' AND type = 'table'
3. Loop through the extracted table names, building an INSERT ... SELECT for each table that inserts that table's rows into the table created in 1 (see the INSERT INTO table SELECT ...; form, especially in regard to handling missing columns).
4. All done: the table created in 1 will be populated accordingly.
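As a hand-written sketch of the generated SQL (the all_kos name is made up, and only tables up to "3kos" are shown; the real script would extend the column lists up to ko525):
CREATE TABLE all_kos (
    ko_index_id INTEGER PRIMARY KEY,
    ko1 INTEGER,
    ko2 INTEGER,
    ko3 INTEGER
    -- ... up to ko525
);

-- One generated INSERT ... SELECT per source table; columns a table lacks stay NULL:
INSERT INTO all_kos (ko_index_id, ko1)           SELECT ko_index_id, ko1           FROM "1kos";
INSERT INTO all_kos (ko_index_id, ko1, ko2)      SELECT ko_index_id, ko1, ko2      FROM "2kos";
INSERT INTO all_kos (ko_index_id, ko1, ko2, ko3) SELECT ko_index_id, ko1, ko2, ko3 FROM "3kos";

-- The per-column bash loop from the question then collapses to a single query, e.g.:
SELECT count(*)
FROM all_kos
WHERE 127 IN (ko1, ko2, ko3)   -- extend the list up to ko525
  AND ko_index_id IN (SELECT ko_index_id FROM simulationDetails WHERE data1 != 0);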

Remove Unique constraint on a column in sqlite database

I am trying to remove a UNIQUE constraint on a column in an SQLite database, but I do not know the constraint's name. How can I find the name of the UNIQUE constraint so that I can remove it?
Below is the schema I see for the table; the constraint I want to remove is:
UNIQUE (datasource_name)
sqlite> .schema datasources
CREATE TABLE "datasources" (
created_on DATETIME NOT NULL,
changed_on DATETIME NOT NULL,
id INTEGER NOT NULL,
datasource_name VARCHAR(255),
is_featured BOOLEAN,
is_hidden BOOLEAN,
description TEXT,
default_endpoint TEXT,
user_id INTEGER,
cluster_name VARCHAR(250),
created_by_fk INTEGER,
changed_by_fk INTEGER,
"offset" INTEGER,
cache_timeout INTEGER, perm VARCHAR(1000), filter_select_enabled BOOLEAN, params VARCHAR(1000),
PRIMARY KEY (id),
CHECK (is_featured IN (0, 1)),
CHECK (is_hidden IN (0, 1)),
FOREIGN KEY(created_by_fk) REFERENCES ab_user (id),
FOREIGN KEY(changed_by_fk) REFERENCES ab_user (id),
FOREIGN KEY(cluster_name) REFERENCES clusters (cluster_name),
UNIQUE (datasource_name),
FOREIGN KEY(user_id) REFERENCES ab_user (id)
);
SQLite only supports a limited ALTER TABLE, so you can't remove the constraint using ALTER TABLE. What you can do to "drop" the constraint is to rename the table, create a new table with the same schema except for the UNIQUE constraint, and then insert all data into the new table. This procedure is documented in the Making Other Kinds Of Table Schema Changes section of the ALTER TABLE documentation.
I just ran into this myself. An easy solution was using DB Browser for SQLite: it let me remove a unique constraint with just a checkbox in a GUI.
PRAGMA foreign_keys=off;
BEGIN TRANSACTION;
ALTER TABLE table_name RENAME TO old_table;
CREATE TABLE table_name
(
column1 datatype [ NULL | NOT NULL ],
column2 datatype [ NULL | NOT NULL ],
...
);
INSERT INTO table_name SELECT * FROM old_table;
COMMIT;
PRAGMA foreign_keys=on;
Source: https://www.techonthenet.com/sqlite/unique.php
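Applied to the datasources table from the question, that recipe might look like the following sketch (the column list is copied from the schema shown above, with only the UNIQUE (datasource_name) line left out):
PRAGMA foreign_keys=off;
BEGIN TRANSACTION;
ALTER TABLE datasources RENAME TO datasources_old;
CREATE TABLE "datasources" (
    created_on DATETIME NOT NULL,
    changed_on DATETIME NOT NULL,
    id INTEGER NOT NULL,
    datasource_name VARCHAR(255),
    is_featured BOOLEAN,
    is_hidden BOOLEAN,
    description TEXT,
    default_endpoint TEXT,
    user_id INTEGER,
    cluster_name VARCHAR(250),
    created_by_fk INTEGER,
    changed_by_fk INTEGER,
    "offset" INTEGER,
    cache_timeout INTEGER,
    perm VARCHAR(1000),
    filter_select_enabled BOOLEAN,
    params VARCHAR(1000),
    PRIMARY KEY (id),
    CHECK (is_featured IN (0, 1)),
    CHECK (is_hidden IN (0, 1)),
    FOREIGN KEY(created_by_fk) REFERENCES ab_user (id),
    FOREIGN KEY(changed_by_fk) REFERENCES ab_user (id),
    FOREIGN KEY(cluster_name) REFERENCES clusters (cluster_name),
    FOREIGN KEY(user_id) REFERENCES ab_user (id)
);
INSERT INTO datasources SELECT * FROM datasources_old;
DROP TABLE datasources_old;
COMMIT;
PRAGMA foreign_keys=on;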
I was just working through this issue on a small database and found it easier to dump the data as SQL statements; the dump prints out your tables exactly as they are and also adds the INSERT INTO statements needed to rebuild the DB.
The .help terminal command shows:
.dump ?OBJECTS? Render database content as SQL
and prints the SQL to the terminal; you can redirect it to a text file and edit it there. For one-off changes and tidying, this seems like a reasonable solution, albeit a little inelegant.