Adding a column to an SQLite database and distributing rows based on primary key - sql

I have some data elements containing a timestamp and information about Item X sales related to this timestamp.
e.g.
timestamp | items X sold
------------------------
1 | 10
4 | 40
7 | 20
I store this data in an SQLite table. Now I want to add to this table, especially when I get data about another item Y.
The item Y data might or might not have different timestamps but I want to insert this data into the existing table so that it looks like this:
timestamp | items X sold | items Y sold
------------------------------------------
1 | 10 | 5
2 | NULL | 10
4 | 40 | NULL
5 | NULL | 3
7 | 20 | NULL
Later on additional sales data (columns) must be added with the same scheme.
Is there an easy way to accomplish this with SQLite?
In the end I want to fetch data by timestamp and get an overview of which items were sold at that time. Most examples consider the use case of adding a complete row (one record) or a complete column that perfectly matches the other columns.
Or is SQLite the wrong tool altogether, and should I rather use CSV or Excel?
(I am using Python's sqlite3 package to create and manipulate the DB.)
Thanks!

Dynamically adding columns is not a good design. You could add them using
ALTER TABLE your_table ADD COLUMN the_column_name TEXT
For existing rows, the new column would be populated with NULLs, although you could specify a DEFAULT value, in which case the existing rows would be populated with that value instead.
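As a variant, adding the column with a default would look like this (the default value shown here is just an arbitrary example):
ALTER TABLE your_table ADD COLUMN the_column_name TEXT DEFAULT 'unknown'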
e.g. the following demonstrates the above :-
DROP TABLE IF EXISTS soldv1;
CREATE TABLE IF NOT EXISTS soldv1 (timestamp INTEGER PRIMARY KEY, items_sold_x INTEGER);
INSERT INTO soldv1 VALUES(1,10),(4,40),(7,20);
SELECT * FROM soldv1 ORDER BY timestamp;
ALTER TABLE soldv1 ADD COLUMN items_sold_y INTEGER;
UPDATE soldv1 SET items_sold_y = 5 WHERE timestamp = 1;
INSERT INTO soldv1 VALUES(2,null,10),(5,null,3);
SELECT * FROM soldv1 ORDER BY timestamp;
resulting in the first query returning :-
timestamp | items_sold_x
1 | 10
4 | 40
7 | 20
and the second query returning :-
timestamp | items_sold_x | items_sold_y
1 | 10 | 5
2 | NULL | 10
4 | 40 | NULL
5 | NULL | 3
7 | 20 | NULL
However, as stated, the above is not considered a good design as the schema is dynamic.
You could alternatively manage an equivalent of the above with the addition of either a new column (which also becomes part of the primary key) or by prefixing/suffixing the timestamp with a type.
Consider, as an example, the following :-
DROP TABLE IF EXISTS soldv2;
CREATE TABLE IF NOT EXISTS soldv2 (type TEXT, timestamp INTEGER, items_sold INTEGER, PRIMARY KEY(timestamp,type));
INSERT INTO soldv2 VALUES('x',1,10),('x',4,40),('x',7,20);
INSERT INTO soldv2 VALUES('y',1,5),('y',2,10),('y',5,3);
INSERT INTO soldv2 VALUES('z',1,15),('z',2,5),('z',9,25);
SELECT * FROM soldv2 ORDER BY timestamp;
This has replicated, data-wise, your original data and additionally added another type (what would have been column items_sold_z) without having to change the table's schema, and without the additional complication of needing to UPDATE rather than INSERT, as was required when applying items_sold_y = 5 at timestamp 1.
The result from the query being :-
type | timestamp | items_sold
x | 1 | 10
y | 1 | 5
z | 1 | 15
y | 2 | 10
z | 2 | 5
x | 4 | 40
y | 5 | 3
x | 7 | 20
z | 9 | 25
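If you still want the wide, one-column-per-item overview when fetching by timestamp, it can be derived from soldv2 at query time with conditional aggregation rather than being stored that way (a sketch; the column aliases are just illustrative):
SELECT timestamp,
       MAX(CASE WHEN type = 'x' THEN items_sold END) AS items_sold_x,
       MAX(CASE WHEN type = 'y' THEN items_sold END) AS items_sold_y,
       MAX(CASE WHEN type = 'z' THEN items_sold END) AS items_sold_z
FROM soldv2
GROUP BY timestamp
ORDER BY timestamp;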
Or is sqlite the wrong tool at all? And I should rather use csv or excel?
SQLite is a valid tool. What you then do with the data can probably be done as easily as in Excel (perhaps more simply), and probably much more simply than trying to process the data in CSV format.
For example, say you wanted the total items sold per timestamp and how many types were sold then :-
SELECT timestamp, count(items_sold) AS number_of_item_types_sold, sum(items_sold) AS total_sold FROM soldv2 GROUP BY timestamp ORDER BY timestamp;
would result in :-
timestamp | number_of_item_types_sold | total_sold
1 | 3 | 30
2 | 2 | 15
4 | 1 | 40
5 | 1 | 3
7 | 1 | 20
9 | 1 | 25

Related

Generating a decrementing ID while inserting data on a Teradata table

I'm trying to insert data from a query (or a volatile table) into another table which has an id column (type smallint with a NOT NULL constraint) that should be unique, on Teradata using Teradata SQL Assistant. The current min(id) is -5 and I should insert the new data with lower ids.
This is a simple example:
table a
id| aa |bb
-3|text |text_2
-5|text_3|text_4
and the data I should insert is, for example:
aa | bb
text_5|text_6
text_7|text_8
text_9|text_10
so the result should be like
id| aa |bb
-3|text |text_2
-5|text_3|text_4
-6|text_5|text_6
-7|text_7|text_8
-8|text_9|text_10
I tried to get around this by creating a volatile table with a generated id defined to start by -5 increment by -1 no cycle.
But I get an error:
Expected something like a name or a unicode delimited identifier or a cycle keyword between an integer and ','
Is there any other way to do it, please?
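One possible workaround, sketched only and not tested against Teradata SQL Assistant, is to derive the new ids from the current minimum with ROW_NUMBER; here a is the target table from the example and new_data is a placeholder for whatever query or volatile table holds the rows to insert:
INSERT INTO a (id, aa, bb)
SELECT m.min_id - ROW_NUMBER() OVER (ORDER BY d.aa), d.aa, d.bb
FROM new_data d
CROSS JOIN (SELECT MIN(id) AS min_id FROM a) m;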

I have a table where I need to update or insert depending on field parameters

I have spent many hours researching this problem and trying various solutions, but I never quite found one that suits my specific problem. I am new to SQL and some of the examples are confusing as well.
So here is my dilemma. I have an equipment table that tracks oil changes for specific units in a database. The table looks like this:
**id | UnitID | Posted_On | Date_Completed | Note | OverDueBy**
1 | BT-109F | 2019-02-04 | 2019-02-14 | Hrs Overdue | 23
1 | BT-108G | 2020-01-17 | 2020-01-22 | Days Overdue | 12
1 | BT-122K | 2020-01-02 | 2020-01-16 | Days Overdue | 12
1 | BT-109F | 2019-02-04 | | Days Overdue | 3
The example records above need to be created or updated by the query. The date completed is entered manually by the technician when he has completed the oil change.
What I want the query to do is: check whether a specific unit has a record where the 'Date_Completed' field is empty, and if so update the 'OverDueBy' field to reflect the new value. If all the records for the specified unit have the 'Date_Completed' field filled in, then the query should create a new record with all fields filled in except for the 'Date_Completed' field.
Can anyone help me construct such a query?
Thanks
Clan
First create a unique partial index for the column UnitID:
CREATE UNIQUE INDEX idx_unit ON tablename(UnitID)
WHERE Date_Completed IS NULL;
so that only 1 row with Date_Completed=null is allowed for each UnitID.
So a statement like this:
INSERT INTO tablename(id, UnitID, Posted_On, Date_Completed, Note, OverDueBy)
VALUES (?, 'BT-109F', ?, null, ?, ?)
ON CONFLICT(UnitID) WHERE Date_Completed IS NULL DO UPDATE
SET OverDueBy = ?;
will insert the new values only if there is no row already for UnitID='BT-109F' with null in Date_Completed.
But if there is such a row then it will update the column OverDueBy.
I'm not sure what values you want to insert or what will be the updated value so replace the ? with the appropriate values.
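As a concrete sketch (the literal values are made up and this upsert syntax needs SQLite 3.24 or later; excluded refers to the row the INSERT would have inserted):
INSERT INTO tablename (id, UnitID, Posted_On, Date_Completed, Note, OverDueBy)
VALUES (5, 'BT-109F', '2020-02-01', NULL, 'Days Overdue', 4)
ON CONFLICT(UnitID) WHERE Date_Completed IS NULL DO UPDATE
SET OverDueBy = excluded.OverDueBy;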
Firstly I would use a view rather than a table to store any calculated data - it reduces storage overheads and the calculation is refreshed every time the view is opened.
If you're using SQLite you should be able to get the overdue value by subtracting Posted_On from the current date, using a function such as date('now') or julianday('now') - read up on and test these functions to ensure they do what you want.
So along the lines of:-
create view MyView as select *, julianday('now') - julianday(Posted_On) as OverDueBy from ClansTable where Date_Completed is null;
If you want to store a snapshot you can always create a table from a view in any case:-
create table MyStoredOverduesOn4thFeb as select * from MyView;
You can find your units that have all Date_Completed and create a single new record like so:-
Create table CompletedUnits as select id, UnitID, max(posted_on) as latest_posted_on, '' as Date_Completed from ClansTable group by id, UnitID having count(*) = count(Date_Completed);
Test this SQL and see if you can get it working - note I've created a text field for the date. Apparently there is no date/datetime data type as such:-
https://www.sqlitetutorial.net/sqlite-date/
Hope this helps,
Phil
I think you need something like this:
MERGE INTO EQUIPMENT A
USING (SELECT * FROM EQUIPMENT B WHERE DATE_COMPLETED IS NULL) C
ON (A.UNITID=C.UNITID)
WHEN MATCHED THEN UPDATE SET A.OVERDUEBY='new value'
WHEN NOT MATCHED THEN INSERT (A.id,A.UnitID,A.Posted_On,A.Date_Completed,A.Note,A.OverDueBy)
VALUES (C.id,C.UnitID,C.Posted_On,NULL,C.Note,C.OverDueBy)
Not sure where new values from update will come from. It's not clear in your question. But something like this could work.

SQL Moving Row to Identical Table WITHOUT auto-increment (SQL Server 2008)

I have two tables - "RENTED" and "HISTORY." Once an item is returned I need to move it to the "HISTORY" table using a procedure. The tables are identical in every way. The primary key is just a number, but is NOT auto-incremented. When I try to move a row from Rented to History, I get a clash because the primary keys both have the number 2 for an ID number. I know I just need to find the max value of the primary key in the HISTORY table, then add the row after. Seemed easy, ended up being hard to do. Lastly, I delete the row from the RENTED Table, which I am able to do. Please assist me with the row movement. Thanks!
Also, I looked at some other similar code samples/answers here, but didn't find a solution quite yet.
Create Procedure spMoveToHistory
@RENTED_OUT_NUM bigint
AS
Begin
Insert Into HISTORY
Select *
From RENTED_OUT
Where RENTED_OUT_NUM = @RENTED_OUT_NUM
Select @RENTED_OUT_NUM = (MAX(HISTORY_NUM)+1)
From HISTORY
Delete From RENTED
Where RENTED_OUT_NUM = @RENTED_OUT_NUM
End
So in this procedure, I just want to enter the number 2 and take the 2nd record in the RENTED table and move it over to the HISTORY table's next available row. See below for a better visualization of the tables (a few columns omitted).
**RENTED TABLE:**
RENTED_OUT_ID (PK) | ITEM_NAME | ITEM_DESC | DATE_RENTED | DATE_RETURNED
1 | data | data | data | data
2 | move this data | data | data | data
3 | data | data | data | data
**HISTORY TABLE:**
HISTORY_NUM (PK) | ITEM_NAME | ITEM_DESC | DATE_RENTED | DATE_RETURNED
1 | data | data | data | data
2 | data | data | data | data
-> INSERT HERE
You can use the OUTPUT INTO clause to insert the deleted record into the history table in one go. The syntax will be this:
declare @max_id bigint
select @max_id = max(HISTORY_NUM)+1 from history
DELETE FROM rented
OUTPUT @max_id
, DELETED.ITEM_NAME
, DELETED.ITEM_DESC
, DELETED.DATE_RENTED
, DELETED.DATE_RETURNED
INTO history
WHERE RENTED_OUT_NUM = @RENTED_OUT_NUM
The problem is occurring while inserting into the HISTORY_NUM column of the HISTORY table, because it's a primary key and cannot take a repeated value from the RENTED_OUT_ID column of the RENTED table.
So find the existing max HISTORY_NUM value and increment it by one each time you move a record.
Create Procedure spMoveToHistory
@RENTED_OUT_NUM bigint
AS
Begin
DECLARE @HISTORY_ID BIGINT
SELECT @HISTORY_ID = MAX(HISTORY_NUM) FROM HISTORY
Insert Into HISTORY(HISTORY_NUM, ITEM_NAME, ITEM_DESC, DATE_RENTED, DATE_RETURNED)
Select @HISTORY_ID + 1 , ITEM_NAME, ITEM_DESC, DATE_RENTED, DATE_RETURNED
From RENTED_OUT
Where RENTED_OUT_ID = @RENTED_OUT_NUM
Delete From RENTED
Where RENTED_OUT_NUM = @RENTED_OUT_NUM
End
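For completeness, calling the procedure for row 2 from the question would then look like this (a sketch; wrapping the INSERT and DELETE inside the procedure in a transaction would also be advisable so a failure cannot lose the row):
EXEC spMoveToHistory @RENTED_OUT_NUM = 2;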

Using ALTER TABLE command in psql to add to a table

I am trying to solve this extra credit problem for my homework. So we haven't learned about this yet, but I thought I would give it a try because extra credit is always good. I am trying to write an ALTER TABLE statement to add a column to a table. The full definition is here.
Use the ALTER TABLE command to add a field to the table called rank
that is of type smallint. We’ll use this field to store a ranking of
the teams. The team with the highest points value will be ranked
number 1; the team with the second highest points value will be
ranked number 2; etc. Write a PL/pgSQL function named update_rank
that updates the rank field to contain the appropriate number for
all teams. (There are both simple and complicated ways of doing this.
Think about how it can be done with very little code.) Then, define a
trigger named tr_update_rank that fires after an insert or update
of any of the fields {wins, draws}. This trigger should be executed
once per statement (not per row).
The table that I am using is
Table "table.group_standings"
Column | Type | Modifiers
--------+-----------------------+-----------
team | character varying(25)| not null
wins | smallint | not null
losses | smallint | not null
draws | smallint | not null
points | smallint | not null
Indexes:
"group_standings_pkey" PRIMARY KEY, btree (team)
Check constraints:
"group_standings_draws_check" CHECK (draws >= 0)
"group_standings_losses_check" CHECK (losses >= 0)
"group_standings_points_check" CHECK (points >= 0)
"group_standings_wins_check" CHECK (wins >= 0)
Here's my code:
ALTER TABLE group_standings ADD COLUMN rank smallint;
I need help with writing the function to rank the teams.
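A minimal sketch of one possible solution (this is an assumption, not part of the assignment text; it uses PostgreSQL's RANK() window function, and only the names update_rank and tr_update_rank come from the assignment):
CREATE OR REPLACE FUNCTION update_rank() RETURNS trigger AS $$
BEGIN
    -- recompute every team's rank from its points in a single statement
    UPDATE group_standings g
    SET rank = r.new_rank
    FROM (SELECT team, RANK() OVER (ORDER BY points DESC) AS new_rank
          FROM group_standings) r
    WHERE g.team = r.team;
    RETURN NULL;  -- the return value is ignored for AFTER ... FOR EACH STATEMENT triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER tr_update_rank
AFTER INSERT OR UPDATE OF wins, draws ON group_standings
FOR EACH STATEMENT
EXECUTE PROCEDURE update_rank();
Because the trigger fires only on changes to wins or draws, the UPDATE of rank inside the function does not re-fire it.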

SQL - keep values with UPDATE statement

I have a table "news" with 10 rows and cols (uid, id, registered_users, ....) Now i have users that can log in to my website (every registered user has a user id). The user can subscribe to a news on my website.
In SQL that means: I need to select the table "news" and the row with the uid (from the news) and insert the user id (from the current user) to the column "registered_users".
INSERT INTO news (registered_users)
VALUES (user_id)
The INSERT statement has NO WHERE clause, so I need an UPDATE statement.
UPDATE news
SET registered_users=user_id
WHERE uid=post_news_uid
But if more than one user subscribes to the same news item, the old user id in "registered_users" is lost....
Is there a way to keep the current values after an SQL UPDATE statement?
I use PHP (mysql). The goal is this:
table "news" row 5 (uid) column "registered_users" (22,33,45)
--- 3 users have subscribed to the news with the uid 5
table "news" row 7 (uid) column "registered_users" (21,39)
--- 2 users have subscribed to the news with the uid 7
It sounds like you are asking to insert a new user, to change a row in news from:
5 22,33
and then user 45 signs up, and you get:
5 22,33,45
If I don't understand, let me know. The rest of this solution is an excoriation of this approach.
This is a bad, bad, bad way to store data. Relational databases are designed around tables that have rows and columns. Lists should be represented as multiple rows in a table, and not as string concatenated values. This is all the worse, when you have an integer id and the data structure has to convert the integer to a string.
The right way is to introduce a table, say NewsUsers, such as:
create table NewsUsers (
    NewsUserId int identity(1, 1) primary key,
    NewsId int not null,
    UserId int not null,
    CreatedAt datetime default getdate(),
    CreatedBy varchar(255) default system_user
);
I showed this syntax using SQL Server. The column NewsUserId is an auto-incrementing primary key for this table. The column NewsId is the news item (5 in your first example). The column UserId is the user id that signed up. The columns CreatedAt and CreatedBy are handy columns that I put in almost all my tables.
With this structure, you would handle your problem by doing:
insert into NewsUsers (NewsId, UserId)
select 5, <userid>;
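If you still want the comma-separated overview per news item, it can be built at query time instead of being stored; for example in MySQL, which the question mentions, something along these lines should work (a sketch using the column names above):
select NewsId, group_concat(UserId) as registered_users
from NewsUsers
group by NewsId;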
You should create an additional table to map users to the news items they have registered for,
like:
create table user_news (user_id int, news_id int);
that looks like
----------------
| News | Users|
----------------
| 5 | 22 |
| 5 | 33 |
| 5 | 45 |
| 7 | 21 |
| ... | ... |
----------------
Then you can use multiple queries to first retrieve the news_id and the user_id, store them in variables (depending on what language you use), and then insert them into user_news.
The advantage is that finding all users of a news item is much faster, because you don't have to parse every single id string like "(22, 33, 45)".
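For example, with the user_news table above, subscribing a user and listing all subscribers of news item 5 become plain statements (a sketch using the example ids from the question):
insert into user_news (user_id, news_id) values (45, 5);
select user_id from user_news where news_id = 5;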
It sounds like you want to INSERT with a SELECT statement - INSERT with SELECT
Example:
INSERT INTO tbl_temp2 (fld_id)
SELECT tbl_temp1.fld_order_id
FROM tbl_temp1
WHERE tbl_temp1.fld_order_id > 100;