How to select a table dynamically with HSQLDB and Hibernate?

I have a table with references to other tables. It stores the table name and the entity id, like this:
ref_table
 id | table_name | refId
----+------------+-------
  1 | test       |     6
  2 | test       |     9
  3 | other      |     5
Now I am trying to formulate an SQL function that returns the correct entities from the correct tables. Something like:
SELECT * FROM resolveId(3)
I would expect to get the entity with the id "5" from the table "other". Is this possible? I would guess I can do it with a stored procedure (CREATE FUNCTION). The function would have to inspect the "ref_table" and return the name of the table to use in the SQL statement ... but how exactly?

If you want to use the resulting entities in select statements or joins, you should use CREATE FUNCTION with RETURNS TABLE ( .. )
There is a limitation in HSQLDB routines that disallows dynamically created SQL. The body of the CREATE FUNCTION can instead include a CASE or IF ... ELSE block that switches to a pre-defined SELECT statement based on the input value (1, 2, 3, ..).
The details of CREATE FUNCTION are documented here:
http://hsqldb.org/doc/2.0/guide/sqlroutines-chapt.html#N12CC4
There is one example for an SQL function with RETURNS TABLE.
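A minimal sketch of such a function, assuming both referenced tables can be projected onto the same (id, data) column list; the function name, parameter name, and column types here are illustrative, not taken from the question:
CREATE FUNCTION resolveId(p_id INT)
  RETURNS TABLE (id INT, data VARCHAR(100))
  READS SQL DATA
  BEGIN ATOMIC
    DECLARE v_table VARCHAR(128);
    DECLARE v_ref_id INT;
    -- look up which table and which row the reference points at
    SET v_table = (SELECT table_name FROM ref_table WHERE id = p_id);
    SET v_ref_id = (SELECT refId FROM ref_table WHERE id = p_id);
    -- switch between pre-defined SELECT statements; dynamic SQL is not available
    IF v_table = 'test' THEN
      RETURN TABLE (SELECT id, data FROM test WHERE id = v_ref_id);
    ELSE
      RETURN TABLE (SELECT id, data FROM other WHERE id = v_ref_id);
    END IF;
  END;
With that in place, SELECT * FROM TABLE(resolveId(3)) should return the row with id 5 from the table other (note the TABLE() wrapper, which HSQLDB requires when calling a table function in a FROM clause).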

Related

Snowflake Reference Table

How can I create a reference table in Snowflake that picks up the column name and the column value together as a result, to be referenced in another SQL query?
Suppose I have an item table (as shown below):
ITEM_NAME | ORDER_TYPE | REGION
----------+------------+-------
Godan     | Return     | North
And I have a set of rules as mentioned below:
If Item name is like '%NAME%' then YES
If Region is 'NORTH' then NO, etc.
What would be the best way to create a reference table in Snowflake containing the results of these rules, which can later be accessed (column and value together) by any other view in Snowflake or an ETL tool?
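A minimal sketch of one way to encode such rules, assuming one rule per row evaluated with LIKE patterns against the item table; every name below (the rules and item tables, the view, the result values) is invented for illustration:
CREATE TABLE rules (
  column_name STRING,  -- which item column the rule inspects
  pattern     STRING,  -- LIKE pattern matched against the value
  result      STRING   -- e.g. 'YES' or 'NO'
);

INSERT INTO rules VALUES
  ('ITEM_NAME', '%NAME%', 'YES'),
  ('REGION',    'NORTH',  'NO');

-- A view that keeps the column name and the rule result together per item,
-- so other views or an ETL tool can select from it directly
CREATE VIEW rule_results AS
SELECT i.ITEM_NAME,
       r.column_name,
       r.result
FROM item i
JOIN rules r
  ON (r.column_name = 'ITEM_NAME' AND UPPER(i.ITEM_NAME) LIKE r.pattern)
  OR (r.column_name = 'REGION'    AND UPPER(i.REGION)    LIKE r.pattern);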

How to call a column named "group" in Snowflake?

I have a table in Snowflake with the following structure:
 id | group  | subgroup
----+--------+----------
  1 | verst  | burg
  2 | travel | plane
  3 | rest   | bet
I need to call only the column "group", so I tried the following code:
select t2.group
from table as t2
but the following error arises
SQL compilation error: syntax error line 1 at position 7 unexpected 'group'. syntax error line 2 at position 0 unexpected 'from'.
I have also tried using:
select group
from table as t2
select "group"
from table as t2
but I always get the same error.
I know I can call the whole table using * but the real table where I get this data from has many more columns and we want to display this data in a dashboard. Additionally, I am not the owner of the table since it is filled by a microservice, so I cannot change the column names and I can't modify the microservice process.
I would appreciate any suggestion.
Given the column could not have been created without double quotes (GROUP is a reserved word), you need to know how it was created to know how to refer to it. Which is to say, if the create code was CREATE TABLE awsome ("GrOuP" string); then you will need to type "GrOuP".
But by default identifiers are stored in upper case, so try "GROUP" first.
There is also a session setting that ignores case inside double quotes, which might help: see QUOTED_IDENTIFIERS_IGNORE_CASE.
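Applying the setting looks like this (note it changes how every double-quoted identifier resolves for the rest of the session):
ALTER SESSION SET QUOTED_IDENTIFIERS_IGNORE_CASE = TRUE;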
Putting group in double quotes worked fine when I tried it:
create or replace temporary table foo ( "group" string );
insert into foo values ('Hello world.');
select "group" from foo;

How to create a trigger that tracks changes to specific columns?

In a PostgreSQL database I have a table called SURVEYS which looks like this:
ID (uuid)                            | name (varchar) | status (boolean) | update_at (timestamp)
-------------------------------------+----------------+------------------+----------------------
9bef1274-f1ee-4879-a60e-16e94e88df38 | Doom           | 1                | 2019-03-26 00:00:00
As you can see, the table has the columns status and update_at.
My task is to create a trigger that will fire the function when the user updates the value in the status column to 2 and changes the value in the update_at column. In the function I would use the ID of the entry that was changed. I created such a trigger. Is it correct to check the column values in the trigger, or do I need to check them in the function? I am a little bit confused.
CREATE TRIGGER CHECK_FOR_UPDATES_IN_SURVEYS
BEFORE UPDATE ON SURVEYS
FOR EACH ROW
WHEN
(OLD.update_at IS DISTINCT FROM NEW.update_at)
AND
(OLD.condition IS DISTINCT FROM NEW.condition AND NEW.condition = 2)
EXECUTE PROCEDURE CREATE_SURVEYS_QUESTIONS_RELATIONSHIP(NEW.id);
Your trigger looks just fine.
There is only one slight syntax problem: the whole WHEN clause has to be surrounded by parentheses.
Also, you cannot pass anything but a constant to the trigger function. But you don't have to do that at all: NEW will be available in the trigger function automatically.
So you could write it like this:
CREATE TRIGGER CHECK_FOR_UPDATES_IN_SURVEYS
BEFORE UPDATE ON SURVEYS
FOR EACH ROW
WHEN
(OLD.update_at IS DISTINCT FROM NEW.update_at
AND
OLD.condition IS DISTINCT FROM NEW.condition AND NEW.condition = 2)
EXECUTE PROCEDURE CREATE_SURVEYS_QUESTIONS_RELATIONSHIP();
It is always preferable to check conditions in the trigger definition, because that will save you unnecessary function calls.
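For completeness, a minimal sketch of what the trigger function could look like; the body and the surveys_questions target table are purely illustrative, not part of the question:
CREATE OR REPLACE FUNCTION CREATE_SURVEYS_QUESTIONS_RELATIONSHIP()
RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    -- NEW is available here automatically; no argument needs to be passed
    INSERT INTO surveys_questions (survey_id) VALUES (NEW.id);  -- hypothetical table
    RETURN NEW;
END;
$$;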

Adding column to sqlite database and distribute rows based on primary key

I have some data elements containing a timestamp and information about Item X sales related to this timestamp.
e.g.
timestamp | items X sold
----------+-------------
        1 |           10
        4 |           40
        7 |           20
I store this data in an SQLite table. Now I want to add to this table, in particular when I get data about another item Y.
The item Y data might or might not share timestamps with the existing rows, but I want to insert it into the existing table so that it looks like this:
timestamp | items X sold | items Y sold
----------+--------------+--------------
        1 |           10 |            5
        2 |         NULL |           10
        4 |           40 |         NULL
        5 |         NULL |            3
        7 |           20 |         NULL
Later on additional sales data (columns) must be added with the same scheme.
Is there an easy way to accomplish this with SQLite?
In the end I want to fetch data by timestamp and get an overview of which items were sold at that time. Most examples cover the use case of adding a complete row (one record), or a complete column if it perfectly matches the other columns.
Or is SQLite the wrong tool altogether, and should I rather use CSV or Excel?
(Using Python's sqlite3 package to create and manipulate the DB)
Thanks!
Dynamically adding columns is not a good design. You could add them using
ALTER TABLE your_table ADD COLUMN the_column_name TEXT
For existing rows, the new column would be populated with NULLs, although you could specify a DEFAULT value, in which case the existing rows would be populated with that value.
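With a default, the statement would look like this (the column name and default value are illustrative):
ALTER TABLE your_table ADD COLUMN items_sold_y INTEGER DEFAULT 0;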
e.g. the following demonstrates the above :-
DROP TABLE IF EXISTS soldv1;
CREATE TABLE IF NOT EXISTS soldv1 (timestamp INTEGER PRIMARY KEY, items_sold_x INTEGER);
INSERT INTO soldv1 VALUES(1,10),(4,40),(7,20);
SELECT * FROM soldv1 ORDER BY timestamp;
ALTER TABLE soldv1 ADD COLUMN items_sold_y INTEGER;
UPDATE soldv1 SET items_sold_y = 5 WHERE timestamp = 1;
INSERT INTO soldv1 VALUES(2,null,10),(5,null,3);
SELECT * FROM soldv1 ORDER BY timestamp;
resulting in the first query returning :-
timestamp | items_sold_x
----------+-------------
        1 |           10
        4 |           40
        7 |           20
and the second query returning :-
timestamp | items_sold_x | items_sold_y
----------+--------------+--------------
        1 |           10 |            5
        2 |         NULL |           10
        4 |           40 |         NULL
        5 |         NULL |            3
        7 |           20 |         NULL
However, as stated, the above is not considered good design, as the schema is dynamic.
You could alternatively manage an equivalent of the above with the addition of either a new column (that also becomes part of the primary key) or by prefixing/suffixing the timestamp with a type.
Consider, as an example, the following :-
DROP TABLE IF EXISTS soldv2;
CREATE TABLE IF NOT EXISTS soldv2 (type TEXT, timestamp INTEGER, items_sold INTEGER, PRIMARY KEY(timestamp,type));
INSERT INTO soldv2 VALUES('x',1,10),('x',4,40),('x',7,20);
INSERT INTO soldv2 VALUES('y',1,5),('y',2,10),('y',5,3);
INSERT INTO soldv2 VALUES('z',1,15),('z',2,5),('z',9,25);
SELECT * FROM soldv2 ORDER BY timestamp;
This has replicated your original data and additionally added another type ('z', which would have been column items_sold_z in the first design) without having to change the table's schema (and without the additional complication of needing to UPDATE rather than INSERT, as happened when applying timestamp 1 items_sold_y 5).
The result from the query being :-
type | timestamp | items_sold
-----+-----------+-----------
x    |         1 |         10
y    |         1 |          5
z    |         1 |         15
y    |         2 |         10
z    |         2 |          5
x    |         4 |         40
y    |         5 |          3
x    |         7 |         20
z    |         9 |         25
Or is SQLite the wrong tool altogether, and should I rather use CSV or Excel?
SQLite is a valid tool. What you then do with the data can probably be done as easily as in Excel (perhaps more simply), and probably much more simply than trying to process the data in CSV format.
For example, say you wanted the total items sold per timestamp and how many types were sold then :-
SELECT timestamp,
       count(items_sold) AS number_of_item_types_sold,
       sum(items_sold) AS total_sold
FROM soldv2
GROUP BY timestamp
ORDER BY timestamp;
would result in :-
timestamp | number_of_item_types_sold | total_sold
----------+---------------------------+-----------
        1 |                         3 |         30
        2 |                         2 |         15
        4 |                         1 |         40
        5 |                         1 |          3
        7 |                         1 |         20
        9 |                         1 |         25

Keeping a column in sync with another column in Postgres

I'm wondering if it's possible to have a column always kept in sync with another column in the same table.
Let this table be an example:
+------+-----------+
| name | name_copy |
+------+-----------+
| John | John |
+------+-----------+
| Mary | Mary |
+------+-----------+
I'd like to:
Be able to INSERT into this table providing a value only for the name column; the name_copy column should automatically take the value I used in name
When UPDATE-ing the name column on a pre-existing row, name_copy should automatically update to match the new name column.
Some solutions
I could do this via application code, but that would be terribly unreliable, as there's no guarantee all changes would go through my code (what if someone changes the data through a DB client?)
What would be a safe and reliable and easy way to tackle this in Postgres?
You can create a trigger. Simple trigger function:
create or replace function trigger_on_example()
returns trigger language plpgsql as $$
begin
  new.name_copy := new.name;
  return new;
end
$$;
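The function alone does nothing until a trigger attaches it to the table; a minimal sketch, assuming the question's table is named my_table (the trigger and table names here are assumptions):
create trigger example_sync_trigger
before insert or update on my_table
for each row execute procedure trigger_on_example();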
In Postgres 12+ there is a nice alternative in the form of generated columns.
create table my_table(
  id int,
  name text,
  name_copy text generated always as (name) stored);
Note that a generated column cannot be written to directly.
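A quick check of the generated-column behaviour (the values are illustrative):
insert into my_table (id, name) values (1, 'John');
update my_table set name = 'Mary' where id = 1;
select name, name_copy from my_table;
-- name and name_copy both now show 'Mary'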
Test both solutions in db<>fiddle.
Don't put name_copy into the table at all. One method is to derive the column in a view and access it through that:
create view v_table as
select t.*, name as name_copy
from t;
That said, I don't really see a use for this.