I have a table which "indexes" the captured documents for an employer's employees. Each document has a unique ID, a URI, a captured date, a staffid (FK), and a document type column named "tag".
gv2=# \d staffdoc
Table "zoozooland_789166.staffdoc"
Column | Type | Modifiers
------------+-----------------------------+-----------------------
id | uuid | not null
asseturi | text | not null
created | timestamp without time zone | not null
created_by | uuid | not null
tag | text | not null
is_active | boolean | not null default true
staffid | uuid | not null
Foreign-key constraints:
"staffdoc_staffid_fkey" FOREIGN KEY (staffid) REFERENCES staffmember(id)
Inherits: assetdoc
I want to write a report that will be used to highlight missing documents.
For each staff member the report should have a column for each document tag type, so the number of columns is not known up front.
Currently I do all of this in the application: generate a list of all possible tags (SELECT DISTINCT tag FROM table), generate a list of all possible staff IDs, then for each staff ID run multiple queries to get the document with the biggest value in the created column for each tag value.
I'm pretty sure I should at least be able to optimise it to one query per document type (tag value), i.e. the most recent document for each staff id, which would be a good-enough optimisation.
The typical scenario is 4 or 5 document tag values (document types), so running 5 queries is much more acceptable than running 5 x number-of-staff queries.
In the final report I have the following columns:
Staff-member Name, doctype1, doctype2, doctype3, etc.
The name is "joined" from the staffmember table. The value in each doctype column is the latest (MAX) created date for that doc tag for that staff member, or "None" if the document is missing for that staff member.
FWIW I'm using Postgres 9.5
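Something along these lines is what I have in mind (a rough sketch only; the tag values 'contract' and 'id_copy' and the staffmember.name column are just placeholders, since the real tag list is discovered at runtime):
-- one query per tag: the most recent document per staff member for that tag
SELECT DISTINCT ON (staffid) staffid, created
FROM staffdoc
WHERE tag = 'contract'
ORDER BY staffid, created DESC;

-- or one query for everything: one row per staff member, one column per known tag;
-- aggregate FILTER is available in 9.5, and missing documents come back as NULL ("None")
SELECT s.id,
       s.name,  -- placeholder for the real name column on staffmember
       max(d.created) FILTER (WHERE d.tag = 'contract') AS contract,
       max(d.created) FILTER (WHERE d.tag = 'id_copy')  AS id_copy
FROM staffmember s
LEFT JOIN staffdoc d ON d.staffid = s.id
GROUP BY s.id, s.name;
A fully dynamic column list would still need crosstab() from the tablefunc extension, or query generation in the application.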
I have some data elements containing a timestamp and information about Item X sales related to this timestamp.
e.g.
timestamp | items X sold
------------------------
1 | 10
4 | 40
7 | 20
I store this data in an SQLite table. Now I want to add to this table, especially when I get data about another item Y.
The item Y data might or might not have different timestamps but I want to insert this data into the existing table so that it looks like this:
timestamp | items X sold | items Y sold
------------------------------------------
1 | 10 | 5
2 | NULL | 10
4 | 40 | NULL
5 | NULL | 3
7 | 20 | NULL
Later on, additional sales data (columns) must be added in the same way.
Is there an easy way to accomplish this with SQLite?
In the end I want to fetch data by timestamp and get an overview of which items were sold at that time. Most examples cover the use case of adding a complete row (one record), or a complete column if it perfectly matches the other columns.
Or is SQLite the wrong tool altogether? Should I rather use CSV or Excel?
(Using Python's sqlite3 package to create and manipulate the DB.)
Thanks!
Dynamically adding columns is not a good design. You could add them using
ALTER TABLE your_table ADD COLUMN the_column_name TEXT
The column, for existing rows, would be populated with NULLs; alternatively, you could specify a DEFAULT value, in which case the existing rows would be populated with that value instead.
e.g. the following demonstrates the above :-
DROP TABLE IF EXISTS soldv1;
CREATE TABLE IF NOT EXISTS soldv1 (timestamp INTEGER PRIMARY KEY, items_sold_x INTEGER);
INSERT INTO soldv1 VALUES(1,10),(4,40),(7,20);
SELECT * FROM soldv1 ORDER BY timestamp;
ALTER TABLE soldv1 ADD COLUMN items_sold_y INTEGER;
UPDATE soldv1 SET items_sold_y = 5 WHERE timestamp = 1;
INSERT INTO soldv1 VALUES(2,null,10),(5,null,3);
SELECT * FROM soldv1 ORDER BY timestamp;
resulting in the first query returning :-
timestamp | items_sold_x
----------+--------------
1         | 10
4         | 40
7         | 20
and the second query returning :-
timestamp | items_sold_x | items_sold_y
----------+--------------+--------------
1         | 10           | 5
2         | NULL         | 10
4         | 40           | NULL
5         | NULL         | 3
7         | 20           | NULL
However, as stated, the above is not considered a good design as the schema is dynamic.
You could alternatively manage an equivalent of the above by adding either a new column (which also becomes part of the primary key) or by prefixing/suffixing the timestamp with a type.
Consider, as an example, the following :-
DROP TABLE IF EXISTS soldv2;
CREATE TABLE IF NOT EXISTS soldv2 (type TEXT, timestamp INTEGER, items_sold INTEGER, PRIMARY KEY(timestamp,type));
INSERT INTO soldv2 VALUES('x',1,10),('x',4,40),('x',7,20);
INSERT INTO soldv2 VALUES('y',1,5),('y',2,10),('y',5,3);
INSERT INTO soldv2 VALUES('z',1,15),('z',2,5),('z',9,25);
SELECT * FROM soldv2 ORDER BY timestamp;
This has replicated, data-wise, your original data and additionally added another type ('z', which would otherwise have needed an items_sold_z column) without having to change the table's schema (and without the additional complication of needing to UPDATE rather than INSERT, as happened above when applying timestamp 1 / items_sold_y 5).
The result from the query being (row order within the same timestamp may vary) :-
type | timestamp | items_sold
-----+-----------+-----------
x    | 1         | 10
y    | 1         | 5
z    | 1         | 15
y    | 2         | 10
z    | 2         | 5
x    | 4         | 40
y    | 5         | 3
x    | 7         | 20
z    | 9         | 25
Or is SQLite the wrong tool altogether? Should I rather use CSV or Excel?
SQLite is a valid tool. Whatever you then do with the data can probably be done as easily as in Excel (perhaps more simply), and probably much more simply than trying to process the data in CSV format.
For example, say you wanted the total items sold per timestamp and how many types were sold then :-
SELECT timestamp, count(items_sold) AS number_of_item_types_sold, sum(items_sold) AS total_sold FROM soldv2 GROUP by timestamp ORDER BY timestamp;
would result in :-
timestamp | number_of_item_types_sold | total_sold
----------+---------------------------+-----------
1         | 3                         | 30
2         | 2                         | 15
4         | 1                         | 40
5         | 1                         | 3
7         | 1                         | 20
9         | 1                         | 25
Running a PostgreSQL database.
I have a table with CITEXT columns for case-insensitivity. When I try to update a CITEXT value to the same word in different casing, it does not work. Postgres reports 1 row updated, as it targeted 1 row, but the value is not changed.
Eg
Table Schema - users
Column | Type
___________________________
user_id | PRIMARY KEY SERIAL
user_name | CITEXT
age | INT
example row:
user_id | user_name | age
_________________________________
1 | ayanaMi | 99
SQL command:
UPDATE users SET user_name = 'Ayanami' WHERE user_id = 1
The above command returns UPDATE 1, but the casing does not change. I assume this is because Postgres sees them as the same value.
The docs state:
If you'd like to match case-sensitively, you can cast the operator's arguments to text.
https://www.postgresql.org/docs/9.1/citext.html
I can force a case sensitive search by using CAST as such:
SELECT * FROM users WHERE CAST(user_name AS TEXT) = 'Ayanami';
[returns empty row]
Is there a way to force case sensitive updating?
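Building on that note from the docs, I imagine the same cast could also be used inside the UPDATE's WHERE clause, something like this sketch (I'm not sure it addresses the behaviour described above):
UPDATE users
SET user_name = 'Ayanami'
WHERE user_id = 1
  AND user_name::text <> 'Ayanami';  -- case-sensitive comparison via the text cast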
I am trying to solve this extra credit problem for my homework. So we haven't learned about this yet, but I thought I would give it a try because extra credit is always good. I am trying to write an ALTER TABLE statement to add a column to a table. The full definition is here.
Use the ALTER TABLE command to add a field to the table called rank
that is of type smallint. We’ll use this field to store a ranking of
the teams. The team with the highest points value will be ranked
number 1; the team with the second highest points value will be
ranked number 2; etc. Write a PL/pgSQL function named update rank
that updates the rank field to contain the appropriate number for
all teams. (There are both simple and complicated ways of doing this.
Think about how it can be done with very little code.) Then, define a
trigger named tr update rank that fires after an insert or update
of any of the fields {wins, draws}. This trigger should be executed
once per statement (not per row).
The table that I am using is
Table "table.group_standings"
Column | Type | Modifiers
--------+-----------------------+-----------
team | character varying(25)| not null
wins | smallint | not null
losses | smallint | not null
draws | smallint | not null
points | smallint | not null
Indexes:
"group_standings_pkey" PRIMARY KEY, btree (team)
Check constraints:
"group_standings_draws_check" CHECK (draws >= 0)
"group_standings_losses_check" CHECK (losses >= 0)
"group_standings_points_check" CHECK (points >= 0)
"group_standings_wins_check" CHECK (wins >= 0)
Here's my code:
ALTER TABLE group_standings ADD COLUMN rank smallint;
I need help with writing the function to rank the teams.
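From reading about window functions and statement-level triggers, I think it might look roughly like the sketch below (assuming the names are meant to be update_rank and tr_update_rank with underscores, and that ranking strictly by points, with ties sharing a rank, is what's wanted), but I'm not sure it's right:
CREATE OR REPLACE FUNCTION update_rank() RETURNS trigger AS $$
BEGIN
    -- recompute every team's rank from its points
    UPDATE group_standings g
    SET rank = r.new_rank
    FROM (
        SELECT team, RANK() OVER (ORDER BY points DESC) AS new_rank
        FROM group_standings
    ) r
    WHERE g.team = r.team;
    RETURN NULL;  -- return value is ignored for an AFTER ... FOR EACH STATEMENT trigger
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER tr_update_rank
    AFTER INSERT OR UPDATE OF wins, draws ON group_standings
    FOR EACH STATEMENT
    EXECUTE PROCEDURE update_rank();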
I have a table "news" with 10 rows and cols (uid, id, registered_users, ....) Now i have users that can log in to my website (every registered user has a user id). The user can subscribe to a news on my website.
In SQL that means: I need to select the table "news" and the row with the uid (from the news) and insert the user id (from the current user) to the column "registered_users".
INSERT INTO news (registered_users)
VALUES (user_id)
The INSERT statement has no WHERE clause, so I need an UPDATE statement instead.
UPDATE news
SET registered_users=user_id
WHERE uid=post_news_uid
But if more than one user subscribes to the same news item, the old user ID in "registered_users" is lost....
Is there a way to keep the current values after an SQL UPDATE statement?
I use PHP (MySQL). The goal is this:
table "news" row 5 (uid) column "registered_users" (22,33,45)
--- 3 users have subscribed to the news with the uid 5
table "news" row 7 (uid) column "registered_users" (21,39)
--- 2 users have subscribed to the news with the uid 7
It sounds like you are asking to insert a new user, to change a row in news from:
5 22,33
and then user 45 signs up, and you get:
5 22,33,45
If I don't understand, let me know. The rest of this solution is an excoriation of this approach.
This is a bad, bad, bad way to store data. Relational databases are designed around tables that have rows and columns. Lists should be represented as multiple rows in a table, and not as string concatenated values. This is all the worse, when you have an integer id and the data structure has to convert the integer to a string.
The right way is to introduce a table, say NewsUsers, such as:
create table NewsUsers (
    NewsUserId int identity(1, 1) primary key,
    NewsId int not null,
    UserId int not null,
    CreatedAt datetime default getdate(),
    CreatedBy varchar(255) default suser_sname()
);
I showed this syntax using SQL Server. The column NewsUserId is an auto-incrementing primary key for this table. The column NewsId is the news item (5 in your first example). The column UserId is the user id that signed up. The columns CreatedAt and CreatedBy are handy columns that I put in almost all my tables.
With this structure, you would handle your problem by doing:
insert into NewsUsers (NewsId, UserId)
    select 5, <userid>;
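Reading the subscriptions back is then an ordinary query, for example (still SQL Server syntax, with the names assumed above):
-- all users signed up for news item 5
select UserId
from NewsUsers
where NewsId = 5;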
You should create an additional table to map users to the news items they have registered for, like:
create table user_news (user_id int, news_id int);
that looks like
----------------
| News | Users|
----------------
| 5 | 22 |
| 5 | 33 |
| 5 | 45 |
| 7 | 21 |
| ... | ... |
----------------
Then you can use queries to first retrieve the news_id and the user_id (storing them in variables, depending on the language you use) and then insert them into user_news.
The advantage is that finding all users of a news item is much faster, because you don't have to parse every single ID string "(22, 33, 45)".
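For example (a sketch in MySQL syntax, using the user_news table above; the column names are the ones assumed there):
-- subscribe user 45 to news item 5
INSERT INTO user_news (news_id, user_id) VALUES (5, 45);

-- all users subscribed to news item 5
SELECT user_id FROM user_news WHERE news_id = 5;

-- or, if the old comma-separated form is still wanted for display
SELECT news_id, GROUP_CONCAT(user_id) AS registered_users
FROM user_news
GROUP BY news_id;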
It sounds like you want to INSERT with a SELECT statement (INSERT ... SELECT).
Example:
INSERT INTO tbl_temp2 (fld_id)
SELECT tbl_temp1.fld_order_id
FROM tbl_temp1
WHERE tbl_temp1.fld_order_id > 100;
PostgreSQL version 9.1. I have a table:
xmltest=# \d xmltest
Table "public.xmltest"
Column | Type | Modifiers
---------+---------+-----------
id | integer | not null
xmldata | xml |
Indexes:
"xmltest_pkey" PRIMARY KEY, btree (id)
xmltest=# select * from xmltest;
id | xmldata
----+---------------------------------------
1 | <root> +
| <child1>somedata for child1 </child1>+
| <child2>somedata for child2 </child2>+
| </root>
(1 row)
Now, how do I update the value inside the element/tag child2?
I'd prefer not to update the whole column at once.
Is there a way to update/add/delete that particular tag's value? If so, please share :)
PostgreSQL's XML functions are aimed at producing and processing XML, not so much at manipulating it, I am afraid.
You can extract values with xpath(), and there are a number of functions to build XML, but I am not aware of built-in functionality to update elements inside a given XML value.
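For example, reading (rather than updating) the child2 value with xpath() might look like this sketch against the xmltest table shown in the question:
-- xpath() returns xml[]; take the first match and cast it to text
SELECT id,
       (xpath('/root/child2/text()', xmldata))[1]::text AS child2_value
FROM xmltest;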