How to get the max_value and min_value of a Postgres sequence?
I created the sequence using this statement
create sequence seqtest increment 1 minvalue 0 maxvalue 20;
I tried the query select max_value from seqtest, which gives this error:
ERROR: column "max_value" does not exist
LINE 1: select max_value from seqtest;
HINT: Perhaps you meant to reference the column "seqtest.last_value".
Output of select * from seqtest
test=# select * from seqtest;
-[ RECORD 1 ]-
last_value | 0
log_cnt | 0
is_called | f
t=# create sequence seqtest increment 1 minvalue 0 maxvalue 20;
CREATE SEQUENCE
t=# select * from pg_sequence where seqrelid = 'seqtest'::regclass;
seqrelid | seqtypid | seqstart | seqincrement | seqmax | seqmin | seqcache | seqcycle
----------+----------+----------+--------------+--------+--------+----------+----------
16479 | 20 | 0 | 1 | 20 | 0 | 1 | f
(1 row)
Postgres 10 introduced a new catalog: https://www.postgresql.org/docs/10/static/catalog-pg-sequence.html
also:
https://www.postgresql.org/docs/current/static/release-10.html
Move sequences' metadata fields into a new pg_sequence system catalog
(Peter Eisentraut)
A sequence relation now stores only the fields that can be modified by
nextval(), that is last_value, log_cnt, and is_called. Other sequence
properties, such as the starting value and increment, are kept in a
corresponding row of the pg_sequence catalog.
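If you only need the bounds of the sequence from the question, a minimal query against that catalog (using the column names shown above) could look like this:
select seqmin as min_value, seqmax as max_value
from pg_sequence
where seqrelid = 'seqtest'::regclass;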
Alternatively, this can be achieved from the psql prompt with the \d command:
postgres=# \d seqtest
Sequence "public.seqtest"
Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
--------+-------+---------+---------+-----------+---------+-------
bigint | 0 | 0 | 20 | 1 | no | 1
In Postgres 10+ there is also the pg_sequences view:
select min_value, max_value from pg_sequences where sequencename = 'seqtest';
https://www.postgresql.org/docs/10/view-pg-sequences.html
But this query shows seqmax=9223372036854775807 (2^63-1) for any kind of sequence somehow.
Related
I have a table of encounters called user_dates that is ordered by 'user' and 'start' like below. I want to create a column indicating whether an encounter was followed up by another encounter within 30 days. So basically I want to go row by row checking if "encounter_stop" is within 30 days of "encounter_start" in the following row (as long as the following row is the same user).
user | encounter_start | encounter_stop
A | 4-16-1989 | 4-20-1989
A | 4-24-1989 | 5-1-1989
A | 6-14-1993 | 6-27-1993
A | 12-24-1999 | 1-2-2000
A | 1-19-2000 | 1-24-2000
B | 2-2-2000 | 2-7-2000
B | 5-27-2001 | 6-4-2001
I want a table like this:
user | encounter_start | encounter_stop | subsequent_encounter_within_30_days
A | 4-16-1989 | 4-20-1989 | 1
A | 4-24-1989 | 5-1-1989 | 0
A | 6-14-1993 | 6-27-1993 | 0
A | 12-24-1999 | 1-2-2000 | 1
A | 1-19-2000 | 1-24-2000 | 0
B | 2-2-2000 | 2-7-2000 | 1
B | 5-27-2001 | 6-4-2001 | 0
You can use select ..., exists (select ... criteria); that returns a boolean (always true or false), but if you really want 1 or 0, just cast the result to integer: true => 1 and false => 0. See Demo:
select ts1.user_id
     , ts1.encounter_start
     , ts1.encounter_stop
     , (exists (select null
                  from test_set ts2
                 where ts1.user_id = ts2.user_id
                   and ts2.encounter_start
                       between ts1.encounter_stop
                           and (ts1.encounter_stop + interval '30 days')::date
               )
       )::integer as subsequent_encounter_within_30_days
  from test_set ts1
 order by user_id, encounter_start;
Difference: The above (and demo) disagree with your expected result:
B | 2-2-2000 | 2-7-2000| 1
subsequent_encounter (the last column) should be 0. This entry starts and ends in Feb 2000, while the other B entry starts in May 2001. Please explain how these are within 30 days (other than a simple typo, that is).
Caution: Do not use user as a column name. It is both a Postgres and SQL Standard reserved word. You can sometimes get away with it by double quoting it, but if you double quote it you MUST always do so. The big problem is that it has a predefined meaning (run select user;), and if you forget to double quote it, you do not necessarily get an error or exception; you get something much worse: wrong results.
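A quick way to see the pitfall, assuming the user_dates table from the question:
select user;                                     -- returns the current role name, not a column
select "user", encounter_start from user_dates;  -- the double-quoted form refers to the column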
I have a table called diary which includes columns listed below:
| id | user_id | custom_foods |
|----|---------|--------------------|
| 1 | 1 | {"56": 2, "42": 0} |
| 2 | 1 | {"19861": 1} |
| 3 | 2 | {} |
| 4 | 3 | {"331": 0} |
I would like to count how many diaries with custom_foods value(s) larger than 0 each user has. I don't care about the keys, since a key can be any number as a string.
The desired output is:
| user_id | count |
|---------|---------|
| 1 | 2 |
| 2 | 0 |
| 3 | 0 |
I started with:
select *
from diary as d
join json_each_text(d.custom_foods) as e
on d.custom_foods != '{}'
where e.value > 0
I don't even know whether the syntax is correct. Now I am getting the error:
ERROR: function json_each_text(text) does not exist
LINE 3: join json_each_text(d.custom_foods) as e
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
The version I am using is: psql (10.5 (Ubuntu 10.5-1.pgdg14.04+1), server 9.4.19). According to the PostgreSQL 9.4.19 documentation, that function should exist. I am so confused that I don't know how to proceed.
Threads that I referred to:
Postgres and jsonb - search value at any key
Query postgres jsonb by value regardless of keys
Your custom_foods column is defined as text, so you should cast it to json before applying json_each_text. Since json_each_text produces no rows for an empty JSON object, you can get a count of 0 for those from a separate CTE and combine the results with a UNION ALL:
WITH empty AS
( SELECT DISTINCT user_id,
0 AS COUNT
FROM diary
WHERE custom_foods = '{}' )
SELECT user_id,
count(CASE
WHEN VALUE::int > 0 THEN 1
END)
FROM diary d,
json_each_text(d.custom_foods::JSON)
GROUP BY user_id
UNION ALL
SELECT *
FROM empty
ORDER BY user_id;
Demo
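For reference, the cast alone is what resolves the original "function json_each_text(text) does not exist" error; a minimal sanity check, assuming custom_foods is stored as text:
select d.user_id, e.key, e.value
from diary d,
     json_each_text(d.custom_foods::json) as e;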
I have event data that looks like this:
id | instance_id | value
1 | 1 | a
2 | 1 | ap
3 | 1 | app
4 | 1 | appl
5 | 2 | b
6 | 2 | bo
7 | 1 | apple
8 | 2 | boa
9 | 2 | boat
10 | 2 | boa
11 | 1 | appl
12 | 1 | apply
Basically, each row is a user typing a new letter. They can also delete letters.
I'd like to create a dataset that looks like this, let's call it data
id | instance_id | value
7 | 1 | apple
9 | 2 | boat
12 | 1 | apply
My goal is to extract all the complete words in each instance, accounting for deletion as well - so it's not sufficient to just get the longest word or the most recently typed.
To do so, I was planning to do a regex operation like so:
select * from data d
where not exists (select * from data d2 where d2.value ~ (d.value || '.'))
Effectively I'm trying to build a dynamic regex that adds matches one character more than is present, and is specific to the row it's matching against.
The code above doesn't seem to work. In Python, I can "compile" a regex pattern before I use it. What is the equivalent in PostgreSQL to dynamically build a pattern?
Try the simple LIKE operator instead of regex patterns:
SELECT * FROM data d1
WHERE NOT EXISTS (
SELECT * FROM data d2
WHERE d2.value LIKE d1.value ||'_%'
)
Demo: https://dbfiddle.uk/?rdbms=postgres_9.6&fiddle=cd064c92565639576ff456dbe0cd5f39
Create an index on the value column; this should speed up the query a bit (a sketch follows).
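For example, a prefix-friendly index might look like the following sketch (the index name is made up; text_pattern_ops is assumed here so the planner can use the index for LIKE 'prefix%' patterns when the database is not using the C collation):
CREATE INDEX data_value_idx ON data (value text_pattern_ops);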
To find peaks in sequential data, window functions are a good choice. You just need to compare each value with the previous and next ones using the lag() and lead() functions:
with cte as (
select
*,
length(value) > coalesce(length(lead(value) over (partition by instance_id order by id)),0) and
length(value) > coalesce(length(lag(value) over (partition by instance_id order by id)),length(value)) as is_peak
from data)
select * from cte where is_peak order by id;
Demo
Information about sequences can be found in the _vt_sequence view which can be joined with _v_sequence on _v_sequence.objid = _vt_sequence.seq_id.
Sequence fetch query:
select vs.* , vts.*
from _v_sequence vs join _vt_sequence vts on vs.objid = vts.seq_id;
The query returns the following columns:
OBJID, SEQNAME, OWNER, CREATEDATE, OBJTYPE, OBJCLASS, OBJDELIM, DATABASE, OBJDB, SCHEMA, SCHEMAID, SEQ_ID, DB_ID, DATA_TYPE, MIN_VALUE, MAX_VALUE, CYCLE, INCREMENT, CACHE_SIZE, NEXT_CACHE_VAL, FLAGS
Sample CREATE SEQUENCE:
CREATE SEQUENCE TEMP_PPC_SEQ AS BIGINT
START WITH 1
INCREMENT BY 2
NO MINVALUE
MAXVALUE 2147483647
NO CYCLE;
There is no column for the START value of the sequence.
How can I fetch the START value of each sequence?
You can find the start value in the sequence table itself. Unfortunately, Netezza does not update the last value in this table. Your answer is:
SELECT s.LAST_VALUE
FROM sequence1 s;
You can find the "START WITH" value used by the DDL at creation time by selecting from the sequence object itself, as if it were a table, and referencing the LAST_VALUE column.
Don't be confused by the column name. LAST_VALUE does not report the last value dispensed by the sequence. The below example demonstrates this.
TESTDB.ADMIN(ADMIN)=> create sequence test_seq_1 as bigint start with 12345;
CREATE SEQUENCE
TESTDB.ADMIN(ADMIN)=> create temp table seq_out as select next value for test_seq_1 from _v_vector_idx a cross join _v_vector_idx b;
INSERT 0 1048576
TESTDB.ADMIN(ADMIN)=> select * from test_seq_1;
SEQUENCE_NAME | LAST_VALUE | INCREMENT_BY | MAX_VALUE | MIN_VALUE | CACHE_VALUE | LOG_CNT | IS_CYCLED | IS_CALLED | DATATYPE
---------------+------------+--------------+---------------------+-----------+-------------+---------+-----------+-----------+----------
TEST_SEQ_1 | 12345 | 1 | 9223372036854775807 | 1 | 100000 | 1 | f | f | 20
(1 row)
TESTDB.ADMIN(ADMIN)=> select next value for test_seq_1;
NEXTVAL
---------
1060921
(1 row)
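Applied to the TEMP_PPC_SEQ sequence from the question, a minimal sketch would be:
-- LAST_VALUE here reflects the START WITH value recorded at creation time,
-- not the last value actually dispensed by the sequence
SELECT sequence_name, last_value AS start_with
FROM TEMP_PPC_SEQ;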
I need to create a view that automatically adds a virtual row number to the result. The table here is totally random; all I want to achieve is for the last column to be created dynamically:
+---------+------------+-----+
| id      | variety    | num |
+---------+------------+-----+
| 234     | fuji       | 1   |
| 4356    | gala       | 2   |
| 343245  | limbertwig | 3   |
| 224     | bing       | 4   |
| 4545    | chelan     | 5   |
| 3455    | navel      | 6   |
| 4534345 | valencia   | 7   |
| 3451    | bartlett   | 8   |
| 3452    | bradford   | 9   |
+---------+------------+-----+
Query:
SELECT id,
variety,
SOMEFUNCTIONTHATWOULDGENERATETHIS() AS num
FROM mytable
Use:
SELECT t.id,
t.variety,
(SELECT COUNT(*) FROM TABLE WHERE id < t.id) +1 AS NUM
FROM TABLE t
It's not an ideal manner of doing this, because the query for the num value will execute for every row returned. A better idea would be to create a NUMBERS table, with a single column containing a number starting at one that increments to an outrageously large number, and then join & reference the NUMBERS table in a manner similar to the variable example that follows.
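A minimal sketch of such a NUMBERS table (the table name and the hand-written population are assumptions; in practice you would fill it up to the largest row count you expect):
CREATE TABLE numbers (n INT NOT NULL PRIMARY KEY);
INSERT INTO numbers (n) VALUES (1), (2), (3), (4), (5);  -- extend as needed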
MySQL Ranking, or Lack Thereof
You can define a variable in order to get pseudo row number functionality, because MySQL doesn't have any ranking functions:
SELECT t.id,
       t.variety,
       @rownum := @rownum + 1 AS num
  FROM TABLE t,
       (SELECT @rownum := 0) r
The SELECT @rownum := 0 defines the variable, and sets it to zero.
The r is a subquery/table alias, because you'll get an error in MySQL if you don't define an alias for a subquery, even if you don't use it.
Can't Use A Variable in a MySQL View
If you do, you'll get error 1351, because you can't use a variable in a view by design. The bug/feature behavior is documented here.
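For instance, a minimal sketch of what triggers it, reusing mytable from the question:
SET @rownum := 0;  -- even with the variable initialized outside the view
CREATE VIEW numbered AS
SELECT id,
       variety,
       @rownum := @rownum + 1 AS num
FROM mytable;
-- rejected with error 1351, because the view's SELECT references a variable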
Oracle has a rowid pseudo-column. In MySQL, you might have to go ugly:
SELECT t.id,
       t.variety,
       1 + (SELECT COUNT(*) FROM tbl WHERE id < t.id) AS num
  FROM tbl t
This query is off the top of my head and untested, so take it with a grain of salt. Also, it assumes that you want to number the rows according to some sort criteria (id in this case), rather than the arbitrary numbering shown in the question.