Is it possible to use sequences in MariaDB Server? - sql

A sequence is an alternative way to generate, for example, the primary key for every record in your tables.
This is the syntax I'm currently using:
CREATE TABLE users(
id BIGINT PRIMARY KEY AUTO_INCREMENT,
name VARCHAR(100) UNIQUE NOT NULL
);
As you can see, the solution I implemented uses the PRIMARY KEY clause with AUTO_INCREMENT.
But in RDBMSs such as Oracle or PostgreSQL it is possible to use sequences to replace the previous code.
For example in Oracle you declare
CREATE SEQUENCE id
START WITH 1
INCREMENT BY 1;
Or in PostgreSQL in this way
CREATE SEQUENCE id
START WITH 1
INCREMENT BY 1;
But is this possible in MariaDB Server?

Since version 10.3, MariaDB natively supports creating sequences; the following example shows how it works.
First let's create the sequence:
CREATE SEQUENCE id
START WITH 1
INCREMENT BY 1;
If, for example, we want to see the structure of the newly created sequence object id, we just execute the following in the console:
MariaDB [blog]> describe id;
+-----------------------+---------------------+------+-----+---------+-------+
| Field                 | Type                | Null | Key | Default | Extra |
+-----------------------+---------------------+------+-----+---------+-------+
| next_not_cached_value | bigint(21)          | NO   |     | NULL    |       |
| minimum_value         | bigint(21)          | NO   |     | NULL    |       |
| maximum_value         | bigint(21)          | NO   |     | NULL    |       |
| start_value           | bigint(21)          | NO   |     | NULL    |       |
| increment             | bigint(21)          | NO   |     | NULL    |       |
| cache_size            | bigint(21) unsigned | NO   |     | NULL    |       |
| cycle_option          | tinyint(1) unsigned | NO   |     | NULL    |       |
| cycle_count           | bigint(21)          | NO   |     | NULL    |       |
+-----------------------+---------------------+------+-----+---------+-------+
From the output above you can notice important details, such as the fact that the sequence values are of type BIGINT by default.
Since, as we said, the numbering starts at 1 and increases by 1, the sequence lets us generate a progressive number that can be associated, as a primary key, with a table.
Let's create a new example table in the MariaDB console:
MariaDB [blog]> CREATE TABLE demo (
-> id BIGINT NOT NULL,
-> name VARCHAR (30),
-> PRIMARY KEY (id));
Finally, with an AUTO_INCREMENT primary key it is not necessary to mention the column in the INSERT statement, but when we use a sequence we do have to write the column name and supply its value explicitly, as in the following example:
MariaDB [blog]> INSERT INTO demo (id, name)
-> VALUES
-> (NEXT VALUE FOR id, 'alpha');
As you can see in the previous statement, to insert the dynamic value generated by the sequence we invoke NEXT VALUE FOR followed by the name of the sequence, id.
Finally, to see the result of the previous statement, we run a regular SELECT on the table and obtain the following:
MariaDB [blog]> SELECT * FROM demo;
+----+-------+
| id | name  |
+----+-------+
|  1 | alpha |
+----+-------+
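If you'd rather not write NEXT VALUE FOR in every INSERT, MariaDB 10.3+ also lets you attach the sequence to the column as its default value. A minimal sketch (the table name demo2 is just for illustration):
CREATE TABLE demo2 (
  -- the default pulls the next value from the sequence created above
  id BIGINT NOT NULL DEFAULT (NEXT VALUE FOR id),
  name VARCHAR(30),
  PRIMARY KEY (id)
);
-- the id column can now be omitted, much like with AUTO_INCREMENT
INSERT INTO demo2 (name) VALUES ('beta');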
Extra configurations:
Optionally, you can configure the following parameters of a sequence in MariaDB (see the sketch after this list):
MINVALUE = you can set it to 1.
MAXVALUE = depending on whether the values will be stored in an INT or a BIGINT column, make sure to set a cap that respects the limits of that data type.
CYCLE = by default a sequence is NOCYCLE; with CYCLE, once the maximum value is reached the counter restarts at the minimum value and the numbering begins again (provided the data type limit allows it).
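As a rough sketch of how those options fit together (the sequence name order_seq and the chosen cap are just illustrative; the cap shown is the maximum of a signed INT):
CREATE SEQUENCE order_seq
START WITH 1
INCREMENT BY 1
MINVALUE 1
MAXVALUE 2147483647
CYCLE;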

In a word - yes. That feature is available since version 10.3.
See the documentation for the full details.

Related

MariaDB - insert record or update timestamp if record exists

Objective: cronjob runs a task; when completed successfully, insert new host record. If record exists, update timestamp to reflect this status.
# Table layout
> describe hosts_completed;
+-----------+---------------------+------+-----+-------------------+-----------------------------+
| Field     | Type                | Null | Key | Default           | Extra                       |
+-----------+---------------------+------+-----+-------------------+-----------------------------+
| id        | bigint(20) unsigned | NO   | PRI | NULL              | auto_increment              |
| timestamp | timestamp           | NO   |     | CURRENT_TIMESTAMP | on update CURRENT_TIMESTAMP |
| hostname  | varchar(32)         | YES  | MUL | NULL              |                             |
+-----------+---------------------+------+-----+-------------------+-----------------------------+
# Current inventory
> select * from hosts_completed;
+----+---------------------+----------+
| id | timestamp           | hostname |
+----+---------------------+----------+
| 10 | 2020-11-02 12:51:08 | myHost1  |
| 11 | 2020-11-02 14:32:16 | MyHost2  |
+----+---------------------+----------+
I want to update the status for myHost1 and my best shot would be like
> insert into hosts_completed(hostname) values("myHost1") ON DUPLICATE KEY UPDATE timestamp=now();
and it runs but adds a new record, it does not update the myHost1 record.
Where is the glitch?
The on duplicate key syntax requires a unique constraint on the column that is used to detect the conflict. Create it first:
alter table hosts_completed
add constraint unique_hostname
unique (hostname);
Note that this requires the column to contain no duplicates already (otherwise you need to clean up your data before you can create the constraint).
Then you can use your current query:
insert into hosts_completed(hostname)
values('myHost1')
on duplicate key update timestamp = now();
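With the constraint in place you can sanity-check the behaviour: running the statement again for the same hostname should leave a single row whose timestamp moves forward, rather than inserting a new one (illustrative only, not output from the original post):
insert into hosts_completed(hostname)
values('myHost1')
on duplicate key update timestamp = now();

-- expect the same id for 'myHost1' as before, with a refreshed timestamp
select id, timestamp, hostname
from hosts_completed
where hostname = 'myHost1';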

INSERT or UPDATE tables with SERIAL primary key

Simple question,
How can I UPSERT in PostgreSQL tables that have SERIAL (also known as auto increment) primary keys?
I couldn't find a really believable source, which is why I'm asking here.
This is what I have so far, but it doesn't work.
INSERT INTO public.friendship (Username_1, Username_2, Status)
VALUES ("User1", "User2", "Accepted")
ON CONFLICT (Username_1, Username_2)
DO UPDATE SET (Status) = ("Accepted")
WHERE Username_1 = "User1" AND Username_2 = "User2";
Full schema (ER diagram). The table is "Friendship", but UMLStar exports weird SQL with table names like "public.Friendship". That doesn't matter, though; it works in other cases (I got Register/Login working successfully).
Command \d, List of relations
 Schema |              Name               |   Type   |  Owner
--------+---------------------------------+----------+----------
 public | activitydata                    | table    | postgres
 public | activitydata_activitydataid_seq | sequence | postgres
 public | chatroom                        | table    | postgres
 public | chatroom_chatroomid_seq         | sequence | postgres
 public | country                         | table    | postgres
 public | friendship                      | table    | postgres
 public | friendship_friendshipid_seq     | sequence | postgres
 public | message                         | table    | postgres
 public | message_messageid_seq           | sequence | postgres
 public | participates                    | table    | postgres
 public | participates_participatesid_seq | sequence | postgres
 public | user                            | table    | postgres
(12 rows)
messenger=# \d Friendship
Table "public.friendship"
    Column    |  Type   |                             Modifiers
--------------+---------+-------------------------------------------------------------------
 friendshipid | integer | not null default nextval('friendship_friendshipid_seq'::regclass)
 username_1   | text    | not null
 username_2   | text    | not null
 status       | text    | not null
Indexes:
    "friendship_pkey" PRIMARY KEY, btree (friendshipid)
    "friendship_username_1_idx" btree (username_1)
    "friendship_username_2_idx" btree (username_2)
Executed command:
messenger=# INSERT INTO public.friendship (Username_1, Username_2, Status)
messenger-# VALUES ('User1', 'User2', 'Accepted')
messenger-# ON CONFLICT (Username_1, Username_2)
messenger-# DO UPDATE SET (Status) = ('Accepted')
messenger-# WHERE Username_1 = 'User1' AND Username_2 = 'User2';
ERROR: column reference "username_1" is ambiguous
LINE 5: WHERE Username_1 = 'User1' AND Username_2 = 'User2';
Table "public.friendship"
    Column    |  Type   |                             Modifiers
--------------+---------+-------------------------------------------------------------------
 friendshipid | integer | not null default nextval('friendship_friendshipid_seq'::regclass)
 username_1   | text    | not null
 username_2   | text    | not null
 status       | text    | not null
Indexes:
    "friendship_pkey" PRIMARY KEY, btree (friendshipid)
    "friendship_username_1_idx" btree (username_1)
    "friendship_username_2_idx" btree (username_2)
This doesn't show a unique key on (username_1, username_2)
What you should do is:
BEGIN;
DROP INDEX friendship_username_1_idx;
CREATE UNIQUE INDEX ON friendship(username_1, username_2);
COMMIT;
If you never query on username_2 without also querying on username_1 then you should drop friendship_username_2_idx and let the one compound UNIQUE index work for you.
For reference we wanted \d Friendship because if you had a unique index it would show...
Table "public.foo"
 Column |  Type   | Modifiers
--------+---------+-----------
 a      | integer |
 b      | integer |
Indexes:
    "foo_a_b_idx" UNIQUE, btree (a, b)
Without something there, how can you get a conflict?
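Once that compound unique index exists, the original statement can be written without the ambiguous WHERE clause (and with single quotes for the string literals); a sketch of what it could look like:
INSERT INTO public.friendship (username_1, username_2, status)
VALUES ('User1', 'User2', 'Accepted')
ON CONFLICT (username_1, username_2)
DO UPDATE SET status = EXCLUDED.status;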
Sorry, if I have missed the complexity of the question but this seems simpler:
ALTER TABLE public.friendship
add column id serial;

postgresql update column with least value of column from another table based on condition

I'm trying to run an update query on the column answer_date of a table P. I want to fill each row of answer_date of P with the unique date from create_date column of H where P.ID1 matches with H.ID1 and where P.acceptance_date is not empty.
The query takes a long while to run, so I checked the interim changes in answer_date, but the entire column is still empty, as if it had just been created.
Btree indexes exist on all the mentioned columns.
Is there something wrong with the query?
UPDATE P
SET answer_date = subquery.date
FROM (SELECT DISTINCT H.create_date as date
FROM H, P
where H.postid=P.acceptance_id
) AS subquery
WHERE P.acceptance_id is not null;
Table schema is as follows:
Table "public.P"
    Column     |            Type             | Modifiers | Storage | Stats target | Description
---------------+-----------------------------+-----------+---------+--------------+-------------
 id            | integer                     | not null  | plain   |              |
 acceptance_id | integer                     |           | plain   |              |
 answer_date   | timestamp without time zone |           | plain   |              |
Indexes:
    "posts_pkey" PRIMARY KEY, btree (id)
    "posts_accepted_answer_id_idx" btree (acceptance_id) WITH (fillfactor='100')
and
Table "public.H"
   Column    |            Type             | Modifiers | Storage | Stats target | Description
-------------+-----------------------------+-----------+---------+--------------+-------------
 id          | integer                     | not null  | plain   |              |
 postid      | integer                     |           | plain   |              |
 create_date | timestamp without time zone | not null  | plain   |              |
Indexes:
    "H_pkey" PRIMARY KEY, btree (id)
    "ph_creation_date_idx" btree (create_date) WITH (fillfactor='100')
Table P has 70 million rows and H has 220 million rows.
Postgres version is 9.6.
Hardware is a Windows laptop with 8 GB of RAM.
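For reference, the subquery above is not correlated with the row being updated, so each row would receive an arbitrary date from the set rather than the one matching its acceptance_id; and since an UPDATE runs as a single transaction, no interim changes are visible from another session until it commits, which would explain why the column still looks empty while the statement is running. A correlated sketch of what was probably intended (assuming the join really is P.acceptance_id = H.postid, as in the original query):
UPDATE P
SET answer_date = H.create_date
FROM H
WHERE H.postid = P.acceptance_id
  AND P.acceptance_id IS NOT NULL;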

Setting unique or check constraint to make a field unique in each entry of another unique field

I have a table as follows :
.-----------------------------.
|           mytable           |
-------------------------------
| id    | primary key         |
| field | varchar             |
| name  | varchar             |
-------------------------------
id and field will be unique. But I want to make name also unique for each field. e.g.
id | field | name
---+-------+---------------------------
 1 | f1    | n1
 2 | f1    | n2
 3 | f1    | n1   < Should be violated
 4 | f2    | n1   < Should be fine
Just add a unique index or constraint:
create unique index idx_mytable_name_field on mytable(name, field)
Note: this solution will not work if you have NULL values in the columns and you want the constraint to apply to them as well; by default Postgres treats NULLs as distinct from each other, so they never trigger a uniqueness violation.
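With that index in place, the example rows from the question behave as intended (a quick illustration using the sample data above):
insert into mytable (id, field, name) values (1, 'f1', 'n1');  -- ok
insert into mytable (id, field, name) values (2, 'f1', 'n2');  -- ok
insert into mytable (id, field, name) values (3, 'f1', 'n1');  -- fails: duplicate (name, field)
insert into mytable (id, field, name) values (4, 'f2', 'n1');  -- ok: same name, different field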

Are there problems with this 'Soft Delete' solution using EAV tables?

I've read some information about the ugly side of just setting a deleted_at field in your tables to signify a row has been deleted.
Namely
http://richarddingwall.name/2009/11/20/the-trouble-with-soft-delete/
Are there any potential problems with taking a row from a table you want to delete and pivoting it into some EAV tables?
For instance.
Let's say I have two tables, deleted and deleted_rows, described as follows.
mysql> describe deleted;
+------------+--------------+------+-----+---------+----------------+
| Field      | Type         | Null | Key | Default | Extra          |
+------------+--------------+------+-----+---------+----------------+
| id         | int(11)      | NO   | PRI | NULL    | auto_increment |
| tablename  | varchar(255) | YES  |     | NULL    |                |
| deleted_at | timestamp    | YES  |     | NULL    |                |
+------------+--------------+------+-----+---------+----------------+
mysql> describe deleted_rows;
+--------+--------------+------+-----+---------+----------------+
| Field  | Type         | Null | Key | Default | Extra          |
+--------+--------------+------+-----+---------+----------------+
| id     | int(11)      | NO   | PRI | NULL    | auto_increment |
| entity | int(11)      | YES  | MUL | NULL    |                |
| name   | varchar(255) | YES  |     | NULL    |                |
| value  | blob         | YES  |     | NULL    |                |
+--------+--------------+------+-----+---------+----------------+
Now, when you want to delete a row from any table, you would delete it from that table and then insert it into these tables, like so.
deleted
+----+-----------+---------------------+
| id | tablename | deleted_at          |
+----+-----------+---------------------+
|  1 | products  | 2011-03-23 00:00:00 |
+----+-----------+---------------------+
deleted_rows
+----+--------+-------------+-------------------------------+
| id | entity | name        | value                         |
+----+--------+-------------+-------------------------------+
|  1 |      1 | Title       | A Great Product               |
|  2 |      1 | Price       | 55.00                         |
|  3 |      1 | Description | You guessed it... it's great. |
+----+--------+-------------+-------------------------------+
A few things I see off the bat:
You'll need to use application logic to do the pivot (Ruby, PHP, Python, etc.).
The table could grow pretty big because I'm using blob to handle the unknown size of the row value.
Do you see any other glaring problems with this type of soft delete?
Why not mirror your tables with archive tables?
create table mytable(
col_1 int
,col_2 varchar(100)
,col_3 date
,primary key(col_1)
)
create table mytable_deleted(
delete_id int not null auto_increment
,delete_dtm datetime not null
-- All of the original columns
,col_1 int
,col_2 varchar(100)
,col_3 date
,index(col_1)
,primary key(delete_id)
)
And then simply add on-delete triggers on your tables that insert the current row into the mirrored table before the deletion. That would give you a dead-simple and very performant solution.
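A minimal sketch of such a trigger for the mytable example above (MySQL/MariaDB syntax; the trigger name is just illustrative):
DELIMITER //
CREATE TRIGGER mytable_before_delete
BEFORE DELETE ON mytable
FOR EACH ROW
BEGIN
  -- copy the row being removed into the archive table, stamped with the deletion time
  INSERT INTO mytable_deleted (delete_dtm, col_1, col_2, col_3)
  VALUES (NOW(), OLD.col_1, OLD.col_2, OLD.col_3);
END//
DELIMITER ;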
You could actually generate the tables and trigger code using the data dictionary.
Note that you might not want a unique index on the original primary key (col_1) in the archive table, because you may actually end up deleting the same row twice over time if you are using natural keys. Unless you plan to hook up the archive tables in your application (for undo purposes) you can drop the index entirely. Also, I added the time of deletion (delete_dtm) and a surrogate key that can be used to delete the deleted (hehe) rows.
You may also consider range partitioning the archive table on deleted_dtm. This makes it pretty much effortless to purge data from the tables.