I am trying to solve this extra credit problem for my homework. We haven't covered this yet, but I thought I would give it a try because extra credit is always good. I need to write an ALTER TABLE statement to add a column to a table. The full problem definition is:
Use the ALTER TABLE command to add a field to the table called rank
that is of type smallint. We’ll use this field to store a ranking of
the teams. The team with the highest points value will be ranked
number 1; the team with the second highest points value will be
ranked number 2; etc. Write a PL/pgSQL function named update_rank
that updates the rank field to contain the appropriate number for
all teams. (There are both simple and complicated ways of doing this.
Think about how it can be done with very little code.) Then, define a
trigger named tr_update_rank that fires after an insert or update
of any of the fields {wins, draws}. This trigger should be executed
once per statement (not per row).
The table that I am using is
Table "table.group_standings"
 Column |         Type          | Modifiers
--------+-----------------------+-----------
 team   | character varying(25) | not null
 wins   | smallint              | not null
 losses | smallint              | not null
 draws  | smallint              | not null
 points | smallint              | not null
Indexes:
"group_standings_pkey" PRIMARY KEY, btree (team)
Check constraints:
"group_standings_draws_check" CHECK (draws >= 0)
"group_standings_losses_check" CHECK (losses >= 0)
"group_standings_points_check" CHECK (points >= 0)
"group_standings_wins_check" CHECK (wins >= 0)
Here's my code:
ALTER TABLE group_standings ADD COLUMN rank smallint;
I need help with writing the function to rank the teams.
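For reference, here is one short way to do it (a sketch, assuming PostgreSQL 8.4+ for window functions; update_rank and tr_update_rank are the names the assignment asks for):

CREATE OR REPLACE FUNCTION update_rank() RETURNS trigger AS $$
BEGIN
    -- Recompute every team's rank from points in a single statement.
    UPDATE group_standings g
    SET rank = r.new_rank
    FROM (SELECT team,
                 RANK() OVER (ORDER BY points DESC) AS new_rank
          FROM group_standings) r
    WHERE g.team = r.team;
    RETURN NULL;  -- the return value is ignored for statement-level AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER tr_update_rank
AFTER INSERT OR UPDATE OF wins, draws ON group_standings
FOR EACH STATEMENT
EXECUTE PROCEDURE update_rank();

Because the trigger only fires on updates that mention wins or draws, the function's own UPDATE of rank does not re-fire it.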
I wonder if it's possible to write a constraint that would make ranges unique (non-overlapping). These ranges are represented as two string-typed columns, bottom and top. Say, if I have the following row in a database,
| id | bottom | top |
|----|--------|-------|
| 1 | 10000 | 10999 |
inserting the row (2, 10100, 10200) would immediately result in a constraint violation error.
P.S. I can't switch to integers, unfortunately -- only strings.
Never store numbers as strings, and always use a range data type like int4range to store ranges. With ranges, you can easily use an exclusion constraint:
ALTER TABLE tab ADD EXCLUDE USING gist (bottom_top WITH &&);
Here, bottom_top is a range data type.
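For example (a sketch; table and column names are placeholders):

CREATE TABLE tab (
    id         integer PRIMARY KEY,
    bottom_top int4range NOT NULL,
    EXCLUDE USING gist (bottom_top WITH &&)
);

INSERT INTO tab VALUES (1, int4range(10000, 10999, '[]'));  -- ok
INSERT INTO tab VALUES (2, int4range(10100, 10200, '[]'));  -- rejected: overlaps row 1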
If you have to stick with the broken data model using two string columns, you can strip # characters and still have an exclusion constraint with
ALTER TABLE tab ADD EXCLUDE USING gist (
int4range(
CAST(trim(bottom, '#') AS integer),
CAST(trim(top, '#') AS integer),
'[]'
) WITH &&
);
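To sanity-check that against the sample data (my demo, assuming plain digit strings with optional # characters):

CREATE TABLE tab2 (id int, bottom text, top text);  -- hypothetical stand-in for your table
ALTER TABLE tab2 ADD EXCLUDE USING gist (
    int4range(
        CAST(trim(bottom, '#') AS integer),
        CAST(trim(top, '#') AS integer),
        '[]'
    ) WITH &&
);

INSERT INTO tab2 VALUES (1, '10000', '10999');  -- succeeds
INSERT INTO tab2 VALUES (2, '10100', '10200');  -- fails with an exclusion violation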
Is it possible to generate a unique constraint in SQL that will allow a single user (user_id) up to two entries that are enabled (enabled)? An example is as follows
user_id | enabled
------------------
123 | true
123 | true
123 | false
456 | true
The above would be valid, but adding another row with user_id = 123 and enabled = true would fail because there would be three such entries. Additionally, adding user_id = 123 and enabled = false would be valid because the table would still satisfy the rule.
You could make it work by adding another boolean column to the UNIQUE or PRIMARY KEY constraint (or UNIQUE index):
CREATE TABLE tbl (
user_id int
, enabled bool
, enabled_first bool DEFAULT true
, PRIMARY KEY (user_id, enabled, enabled_first)
);
enabled_first tags the first instance of each (user_id, enabled) combination with true. I made it DEFAULT true to allow a simple insert for the first enabled row per user_id, without mentioning the added enabled_first. An explicit enabled_first = false is required to insert a second instance.
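To illustrate the mechanics (my example, not part of the original answer):

INSERT INTO tbl (user_id, enabled) VALUES (123, true);  -- 1st enabled row; enabled_first defaults to true
INSERT INTO tbl (user_id, enabled, enabled_first)
VALUES (123, true, false);                              -- 2nd enabled row
INSERT INTO tbl (user_id, enabled) VALUES (123, true);  -- fails: duplicate key (123, true, true)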
NULL values are excluded automatically by the PK constraint I used. Be aware that a simple UNIQUE constraint still allows NULL values, which would circumvent your desired constraint. You would have to define all three columns NOT NULL additionally. See:
Allow null in unique column
db<>fiddle here
Of course, the two true (or two false) rows per user are now distinguishable internally, and you need to adjust write operations accordingly. This may or may not be acceptable; it may even be desirable.
Welcome side-effect: Since the minimum payload (actual data size) is 8 bytes per index tuple, and boolean occupies 1 byte without requiring alignment padding, the index is still the same minimum size as for just (user_id, enabled).
Similar for the table: the added boolean does not increase physical storage. (May not apply for tables with more columns.) See:
Calculating and saving space in PostgreSQL
Is a composite index also good for queries on the first field?
A plain unique constraint cannot allow exactly two rows with the same value of "enabled". But here is a solution that comes close to what you want without using triggers. The idea is to encode the value as numbers and enforce uniqueness on two of the values:
create table t (
    user_id int,
    enabled_code int,
    -- generated column syntax requires PostgreSQL 12+
    is_enabled boolean generated always as (enabled_code <> 0) stored,
    check (enabled_code in (0, 1, 2))
);
create unique index unq_t_enabled_code_1
on t(user_id, enabled_code)
where enabled_code = 1;
create unique index unq_t_enabled_code_2
on t(user_id, enabled_code)
where enabled_code = 2;
Inserting new values is a bit tricky, because you need to check if the value goes in slot "1" or "2". However, you can use is_enabled as the boolean value for querying.
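A sketch of such an insert (hypothetical, and it ignores concurrency; two simultaneous inserts could both pick slot 1, in which case one of them fails on the unique index):

insert into t (user_id, enabled_code)
values (123, case when exists (select 1 from t
                               where user_id = 123 and enabled_code = 1)
                  then 2   -- slot 1 taken, try slot 2
                  else 1 end);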
It has been explained already that a constraint or unique index alone cannot enforce the logic that you want.
An alternative approach would be to use a materialized view. The logic is to use window functions to create an additional column in the view that cycles every two rows having the same (user_id, enabled). You can then put a unique partial index on that column. Finally, you can create a trigger that refreshes the view every time a record is inserted or updated, which effectively enforces the unique constraint.
-- table set-up
create table mytable(user_id int, enabled boolean);
-- materialized view set-up
create materialized view myview as
select
user_id,
enabled,
(row_number() over(partition by user_id, enabled) - 1) % 2 rn
from mytable;
-- unique partial index that enforces integrity
create unique index on myview(user_id, rn) where(enabled);
-- trigger code
create or replace function refresh_myview()
returns trigger language plpgsql
as $$
begin
refresh materialized view myview;
return null;
end$$;
create trigger refresh_myview
after insert or update
on mytable for each row
execute procedure refresh_myview();
With this set-up in place, let's insert the initial content:
insert into mytable values
(123, true),
(123, true),
(234, false),
(234, true);
This works, and the content of the view is now:
user_id | enabled | rn
--------+---------+----
    123 | t       |  0
    123 | t       |  1
    234 | f       |  0
    234 | t       |  0
Now if we try to insert a row that violates the constraint, an error is raised, and the insert is rejected.
insert into mytable values(123, true);
-- ERROR: could not create unique index "myview_user_id_rn_idx"
-- DETAIL: Key (user_id, rn)=(123, 0) is duplicated.
-- CONTEXT: SQL statement "refresh materialized view myview"
-- PL/pgSQL function refresh_myview() line 3 at SQL statement
Demo on DB Fiddle
I have some data elements containing a timestamp and information about Item X sales related to this timestamp.
e.g.
timestamp | items X sold
------------------------
1 | 10
4 | 40
7 | 20
I store this data in an SQLite table. Now I want to add to this table, especially when I get data about another item Y.
The item Y data might or might not have different timestamps but I want to insert this data into the existing table so that it looks like this:
timestamp | items X sold | items Y sold
------------------------------------------
1 | 10 | 5
2 | NULL | 10
4 | 40 | NULL
5 | NULL | 3
7 | 20 | NULL
Later on, additional sales data (columns) must be added following the same scheme.
Is there an easy way to accomplish this with SQLite?
In the end I want to fetch data by timestamp and get an overview of which items were sold at that time. Most examples cover the use case of adding a complete row (one record), or a complete column that perfectly matches the other columns.
Or is SQLite the wrong tool altogether? Should I rather use CSV or Excel?
(Using pythons sqlite3 package to create and manipulate the DB)
Thanks!
Dynamically adding columns is not a good design. You could add them using
ALTER TABLE your_table ADD COLUMN the_column_name TEXT
The column, for existing rows, would be populated with NULLs, although you could specify a DEFAULT value, in which case existing rows would be populated with that value.
e.g. the following demonstrates the above :-
DROP TABLE IF EXISTS soldv1;
CREATE TABLE IF NOT EXISTS soldv1 (timestamp INTEGER PRIMARY KEY, items_sold_x INTEGER);
INSERT INTO soldv1 VALUES(1,10),(4,40),(7,20);
SELECT * FROM soldv1 ORDER BY timestamp;
ALTER TABLE soldv1 ADD COLUMN items_sold_y INTEGER;
UPDATE soldv1 SET items_sold_y = 5 WHERE timestamp = 1;
INSERT INTO soldv1 VALUES(2,null,10),(5,null,3);
SELECT * FROM soldv1 ORDER BY timestamp;
resulting in the first query returning :-

timestamp | items_sold_x
----------+--------------
        1 | 10
        4 | 40
        7 | 20

and the second query returning :-

timestamp | items_sold_x | items_sold_y
----------+--------------+--------------
        1 | 10           | 5
        2 | NULL         | 10
        4 | 40           | NULL
        5 | NULL         | 3
        7 | 20           | NULL
However, as stated, the above is not considered a good design as the schema is dynamic.
You could alternatively manage an equivalent of the above by adding a new column (made part of the primary key) or by prefixing/suffixing the timestamp with a type.
Consider, as an example, the following :-
DROP TABLE IF EXISTS soldv2;
CREATE TABLE IF NOT EXISTS soldv2 (type TEXT, timestamp INTEGER, items_sold INTEGER, PRIMARY KEY(timestamp,type));
INSERT INTO soldv2 VALUES('x',1,10),('x',4,40),('x',7,20);
INSERT INTO soldv2 VALUES('y',1,5),('y',2,10),('y',5,3);
INSERT INTO soldv2 VALUES('z',1,15),('z',2,5),('z',9,25);
SELECT * FROM soldv2 ORDER BY timestamp;
This replicates, data-wise, your original data and additionally adds another type ('z', which would otherwise have required a column items_sold_z) without having to change the table's schema (and without the additional complication of needing to update rather than insert, as when applying items_sold_y = 5 at timestamp 1).
The result from the query being :-

type | timestamp | items_sold
-----+-----------+------------
x    | 1         | 10
y    | 1         | 5
z    | 1         | 15
y    | 2         | 10
z    | 2         | 5
x    | 4         | 40
y    | 5         | 3
x    | 7         | 20
z    | 9         | 25
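If you later want the wide one-column-per-item layout back, a conditional-aggregation pivot (my addition, not part of the original answer) reproduces it:

SELECT timestamp,
       MAX(CASE WHEN type = 'x' THEN items_sold END) AS items_sold_x,
       MAX(CASE WHEN type = 'y' THEN items_sold END) AS items_sold_y,
       MAX(CASE WHEN type = 'z' THEN items_sold END) AS items_sold_z
FROM soldv2
GROUP BY timestamp
ORDER BY timestamp;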
Or is SQLite the wrong tool altogether? Should I rather use CSV or Excel?
SQLite is a valid tool. What you then do with the data can probably be done as easily as in Excel (perhaps more simply), and probably much more simply than trying to process the data in CSV format.
For example, say you wanted the total items sold per timestamp and how many types were sold then :-
SELECT timestamp,
       count(items_sold) AS number_of_item_types_sold,
       sum(items_sold) AS total_sold
FROM soldv2
GROUP BY timestamp
ORDER BY timestamp;
would result in :-

timestamp | number_of_item_types_sold | total_sold
----------+---------------------------+------------
        1 | 3                         | 30
        2 | 2                         | 15
        4 | 1                         | 40
        5 | 1                         | 3
        7 | 1                         | 20
        9 | 1                         | 25
I am using MS SQL Server 2012. I have this bit of SQL:
alter table SomeTable
add Active bit not null default 1
In some environments the default value is applied to existing rows and in other environments we have to add an update script to set the new field to 1. Naturally I am thinking that the difference is a SQL Server setting but my searches thus far are not suggesting which one. Any suggestions?
Let me know if the values of particular settings are desired.
Edit: In the environments that don't apply the default, the existing rows are set to 0, which at least conforms to the NOT NULL constraint.
If you add the column as not null it will be set to the default value for existing rows.
If you add the column as null it will be null despite having a default constraint when added to the table.
For example:
create table SomeTable (id int);
insert into SomeTable values (1);
alter table SomeTable add Active_NotNull bit not null default 1;
alter table SomeTable add Active_Null bit null default 1;
select * from SomeTable;
returns:
+----+----------------+-------------+
| id | Active_NotNull | Active_Null |
+----+----------------+-------------+
| 1 | 1 | NULL |
+----+----------------+-------------+
dbfiddle.uk demo: http://dbfiddle.uk/?rdbms=sqlserver_2016&fiddle=c4aeea808684de48097ff44d391c9954
The default value will be applied to existing rows to avoid violating the NOT NULL constraint.
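As an aside (my addition): if you want existing rows populated even when the new column is nullable, T-SQL offers the WITH VALUES clause:

alter table SomeTable add Active_Null2 bit null default 1 with values;
-- existing rows now get 1 instead of NULL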
I have a table where the results are sorted using an "ORDER" column, eg:
Doc_Id | Doc_Value | Doc_Order
-------+-----------+----------
     1 | aaa       | 1
    12 | xxx       | 5
     2 | bbb       | 12
     3 | ccc       | 24
My issue is how to set up this order column initially, as efficiently and reusably as possible.
My initial take was to set up a scalar function that could be used as a default value when a new entry is added to the table:
ALTER FUNCTION [dbo].[Documents_Initial_Order] ()
RETURNS int
AS
BEGIN
    RETURN (SELECT ISNULL(MAX(DOC_ORDER), 0) + 1 FROM dbo.Documents);
END
When a user wants to swap two documents, I can then easily switch the two order values.
It works nicely, but I now have a second table I need to set up the same way, and I am quite sure there is a nicer way to do it. Any idea?
Based on your comment, I think you have a very workable solution. You could make it a little more user-friendly by specifying it as a default:
alter table documents
add constraint constraint_name
default (dbo.documents_initial_order()) for doc_order
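With that default in place, inserts can omit doc_order entirely (a sketch, assuming the function above exists):

insert into documents (doc_value) values ('new document');
-- doc_order is filled in by dbo.documents_initial_order()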
As an alternative, you could create an insert trigger that copies the identity field to the doc_order field after an insert:
create trigger Doc_Trigger
on Documents
for insert
as
update d
set d.doc_order = d.doc_id
from Documents d
inner join inserted i on i.doc_id = d.doc_id
Example defining doc_id as an identity column:
create table Documents (
doc_id int identity primary key,
doc_order int,
doc_value ntext
)
It sounds like you want an identity column that you can then override once it gets its initial value. One solution would be to have two columns: one called "InitialOrder" that is an auto-increment identity column, and a second column called doc_order that is initially set to the same value as the InitialOrder field (perhaps as part of the insert trigger, or a stored procedure if you are doing inserts that way), but that the user is able to edit.
It does require a few extra bytes per record, but it solves your problem, and as a bonus you would have both the initial document order and the user-set order available.
Also, I am not sure if your doc_order needs to be unique or not, but if not, you can then sort return values by doc_order and InitialOrder to ensure a consistent return sequence.
If there is no need to have any control over what that DOC_ORDER value might be, try using an identity column.
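A minimal sketch of that idea (hypothetical names, SQL Server syntax):

create table Documents2 (
    doc_id    int identity(1,1) primary key,
    doc_value nvarchar(100),
    doc_order int null  -- NULL means "keep insertion order"; users may override it
);

-- read with a fallback sort so unedited rows keep their insertion order:
select doc_id, doc_value
from Documents2
order by coalesce(doc_order, doc_id), doc_id;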