In a working Postgres-XL cluster, a table uses the following sequence:
CREATE SEQUENCE public.t1_id_seq
START WITH -1000000
INCREMENT BY -1
MINVALUE -9223372036854775807
NO MAXVALUE
CACHE 1;
After any power outage there is a good chance that an INSERT statement fails with this error:
InternalServerError
Can not get current value of the sequence
If I explicitly specify a value for that column, the row is inserted. Other tables, whose sequences have a positive start value and increment, remain intact after the incident.
Any help on how to prevent this failure would be appreciated.
Calling Oracle sequence NEXTVAL seems to behave irregularly. Our business constraint (a unique token) requires generating unique numbers, so we use an Oracle sequence. Ours is a Hibernate Spring Boot application, but in this case we call NEXTVAL via a native query (JDBC), so Hibernate management does not take place. After the application has been running in production for some time, our unique token hits a unique constraint exception: the Oracle sequence tries to generate the same number again, or stops increasing after some time.
Across our unique tokens, the difference between consecutive sequence values is normally 1 or 2. But when the sequence value stops increasing (on NEXTVAL), we have to increment the sequence manually by 2494 (the sequence uses CACHE 1000 and ORDER). The sequence is declared ORDER but gets stuck at certain values, and we need to bump it by 2494 to maintain our uniqueness.
Our sequence definition is like this:
CREATE SEQUENCE SEQ_98090 INCREMENT BY 1 CACHE 1000 ORDER NOCYCLE NOPARTITION;
To work around this, we manually increase the sequence value so that our unique business constraint (the unique token) stops hitting unique constraint exceptions.
Our application runs on multiple servers.
When we inspect the sequence value from outside the application, we see that the Oracle sequence value does not increase.
// Fetch the next sequence value with a native query, bypassing Hibernate:
String queryString = "SELECT " + dbSequence + ".NEXTVAL FROM DUAL";
Query query = em.createNativeQuery(queryString);
BigDecimal issuedTokenBin = null;
try {
    issuedTokenBin = (BigDecimal) query.getSingleResult();
} catch (Exception ex) {
    log.error("Error :", ex);
}
Since the value comes directly from an Oracle sequence, it should always be an incremented value.
From the screenshot (not reproduced here): the Oracle sequence got stuck at the first selected row, and the application got a unique constraint violation. Then we manually increased the sequence and the application got going again. The last 8 digits of the unique token, i.e. 79140 and 81634, come from the Oracle sequence.
The overall benefit of sequence caching is speed: the higher the cache value, the less often Oracle must go to the data dictionary to obtain a new batch of sequence values. The negative effect of caching is that once a cache of sequence values exists, Oracle abandons any values still in the cache when the database is restarted.
Therefore, if you do not want gaps in your sequence values, you must pay the price by declaring your sequence NOCACHE.
Do you really need caching in your sequence? How about using the following:
CREATE SEQUENCE SEQ_TEST INCREMENT BY 1 NOCACHE ORDER NOCYCLE ;
In SAP HANA we use sequences.
However, I am not sure what to define for RESET BY:
do I use select max(ID) from tbl or select max(ID) + 1 from tbl?
Recently we got a unique constraint violation on the ID field,
and the sequence is defined with RESET BY select max(ID) from tbl.
Also, is it even better to avoid the RESET BY option?
The common logic for the RESET BY clause is to check the current maximum value (max(ID)) and add an offset (e.g. +1) to avoid double allocation of a key value.
Not using the option effectively disables the ability to automatically set the sequence's current value to one that will not collide with values already stored.
To provide some context: usually the sequence number generator uses a cache (even though it's not set up by default) to allow for high-speed consumption of sequence numbers.
In case of a system failure, the numbers in the cache that have not yet been consumed are "lost" in the sense that the database doesn't retain the information which numbers from the cache had been fetched in a recoverable fashion.
By using the RESET BY clause, the "loss" of numbers can be reduced as the sequence gets set back to the last actually used sequence number.
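As a minimal sketch (the sequence, table, and column names here are made up for illustration), a definition following that logic could look like this:
-- On a restart, HANA re-evaluates the RESET BY query, so the sequence
-- resumes at max(ID) + 1 and cannot hand out an ID that is already stored.
CREATE SEQUENCE seq_tbl_id
    START WITH 1
    INCREMENT BY 1
    RESET BY SELECT IFNULL(MAX(ID), 0) + 1 FROM tbl;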
I created a sequence with a starting value of 2.
create sequence seq_ben
start with 2
increment by 1
nocache
nocycle;
When I was asked to show the next two numbers of the sequence, I wrote
select seq_ben.nextval from dual
and ran this code twice to produce the next two values. Then I was asked to show the next value of the sequence without triggering it to move on to the following number, and to use its next three values to add new rows to the table above. Is this possible? How can I see the next value without advancing the sequence?
You can use CURRVAL, if you have referenced NEXTVAL at least once in the current session.
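For example, a minimal sketch using the seq_ben sequence from the question:
select seq_ben.nextval from dual;  -- advances the sequence, e.g. returns 2
select seq_ben.currval from dual;  -- returns 2 again without advancing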
However, I believe that if you really need to know the next number in the sequence, there is something fundamentally wrong with your design. Sequences are designed so that NEXTVAL is an atomic operation and no two sessions can get the same number; in other words, a sequence is an incrementing unique identifier, and that is the only guarantee it gives you. With this design, it is almost meaningless to ask for the next possible value of a sequence.
You may try to use MAX(), which is often used as a poor man's solution to sequences.
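A sketch of that approach (table and column names are hypothetical; note it is not concurrency-safe, since two sessions can read the same MAX before either inserts):
select nvl(max(id), 0) + 1 as next_id from some_table;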
When I created a sequence for a table article, it started from 17, not 1:
CREATE SEQUENCE seq_article START WITH 1 INCREMENT BY 1;
CREATE OR REPLACE TRIGGER auto_article BEFORE INSERT ON article
FOR EACH ROW
BEGIN
    SELECT seq_article.NEXTVAL INTO :NEW.id_article FROM dual;
END;
/
I tried deleting all rows and inserting fresh data; this time it started from 19. How can I fix that?
I'm not sure that I understand the problem.
A sequence generates unique values. Unless you set the sequence to CYCLE and you exceed the MAXVALUE (not realistically possible given the definition you posted) or you manually reset the sequence (say, by setting the INCREMENT BY to -16, fetching a nextval, and then setting the INCREMENT BY back to 1), it won't ever generate a value of 1 a second time. Deleting the data has no impact on the next id_article that will be generated.
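For illustration, the manual reset mentioned above might look like this (a sketch, assuming the sequence currently stands at 17):
ALTER SEQUENCE seq_article INCREMENT BY -16;  -- step backwards once
SELECT seq_article.NEXTVAL FROM dual;         -- returns 1
ALTER SEQUENCE seq_article INCREMENT BY 1;    -- restore the normal increment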
A sequence-generated column will have gaps. Whether because the sequence cache gets aged out of the shared pool or because a transaction was rolled back, not every value will end up in the table. If you really need gap-free values, you cannot use a sequence. Of course, that means that you would have to serialize INSERT operations which will massively decrease the scalability of your application.
What happens when SQL Server 2005 reaches the maximum value for an IDENTITY column? Does it start from the beginning and refill the gaps?
What is the behavior of SQL Server 2005 when that happens?
You will get an overflow error when the maximum value is reached. If you use the bigint datatype, with a maximum value of 9,223,372,036,854,775,807, this will most likely never happen.
The error message you will get, will look like this:
Msg 220, Level 16, State 2, Line 10
Arithmetic overflow error for data type tinyint, value = 256.
As far as I know, SQL Server provides no functionality to fill the identity gaps, so you will either have to do this yourself or change the datatype of the identity column.
In addition, you can set the start value to the smallest negative number to get an even bigger range of values to use.
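For example (a sketch; the table name is hypothetical), reseeding an int identity column to the bottom of the int range makes all the negative values available:
-- Subsequent inserts continue upward from the bottom of the int range.
DBCC CHECKIDENT ('MyTable', RESEED, -2147483648);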
It will not fill in the gaps. Instead, inserts will fail until you change the definition of the column: either drop the identity and find some other way of filling the gaps, increase the size (go from int to bigint), or change the type of the data (from int to decimal) so that more identity values are available.
You will be unable to insert new rows and will receive the error message listed above until you fix the problem. You can do this in a number of ways. If you still have data using all the ids below the maximum, you will have to change the datatype. If the data is purged on a regular basis and you have a large gap that is not going to be used, you can reseed the identity to the lowest number in that gap. For example, at a previous job, we were logging transactions. We had maybe 40-50 million per month, but we were purging everything older than 6 months, so every few years the identity would get close to 2 billion while nothing had an id below 1.5 billion, and we would reseed back to 0. Again, it's possible that neither of these will work for you and you will have to find a different solution.
If the identity column is an int, your maximum is 2,147,483,647. You will get an overflow error if you exceed it.
If you think this is a risk, use the bigint datatype, which gives you up to 9,223,372,036,854,775,807. It is hard to imagine a database table with that many rows.
In the event that you do hit the maximum number for your identity column, you can move the data from that table into a secondary table with a bigger identity column type, specifying the starting value for the new identity column to continue from the maximum of the previous type. The new identity values will continue from that point.
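A sketch of that migration (all names are hypothetical; the new seed starts just past the int maximum):
-- New table with a bigint identity seeded past the old int limit.
CREATE TABLE MyTableBig (
    id BIGINT IDENTITY(2147483648, 1) PRIMARY KEY,
    payload NVARCHAR(100) NOT NULL
);
-- Copy the existing rows, preserving their original ids.
SET IDENTITY_INSERT MyTableBig ON;
INSERT INTO MyTableBig (id, payload)
SELECT id, payload FROM MyTable;
SET IDENTITY_INSERT MyTableBig OFF;
-- New inserts now receive bigint ids beyond the int range.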
If you delete "old values" from time to time, you just need to reset the seed using:
DBCC CHECKIDENT ('MyTable', RESEED, 0);