When I created a sequence for a table article, it started from 17, not 1:
CREATE SEQUENCE seq_article START WITH 1 INCREMENT BY 1;
CREATE OR REPLACE TRIGGER auto_article BEFORE INSERT ON article
FOR EACH ROW
BEGIN
    SELECT seq_article.NEXTVAL INTO :NEW.id_article FROM dual;
END;
/
I tried deleting all the rows and creating other data; this time it started from 19. How can I fix that?
I'm not sure that I understand the problem.
A sequence generates unique values. Unless you set the sequence to CYCLE and you exceed the MAXVALUE (not realistically possible given the definition you posted) or you manually reset the sequence (say, by setting the INCREMENT BY to -16, fetching a nextval, and then setting the INCREMENT BY back to 1), it won't ever generate a value of 1 a second time. Deleting the data has no impact on the next id_article that will be generated.
A sequence-generated column will have gaps. Whether because the sequence cache gets aged out of the shared pool or because a transaction was rolled back, not every value will end up in the table. If you really need gap-free values, you cannot use a sequence. Of course, that means that you would have to serialize INSERT operations which will massively decrease the scalability of your application.
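As a concrete sketch of the manual reset mentioned above (assuming the sequence's last generated value was 17 and no other session is fetching from it):

```sql
-- Step the sequence back so the next fetch returns 1 again.
-- Assumes the last generated value was 17; unsafe under concurrent inserts.
ALTER SEQUENCE seq_article INCREMENT BY -16;
SELECT seq_article.NEXTVAL FROM dual;   -- returns 17 - 16 = 1
ALTER SEQUENCE seq_article INCREMENT BY 1;
```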
Related
I'm using Firebird 2.5.9. I have a table of information on a set of hardware impact devices that includes a running counter of the number of times each device has impacted. Each time a device is "fired", the hardware impacts one or more times; upon completion of the firing event, that device's row is updated with the timestamp and a result code, and I need to increment the running-counter column by the number of impacts for that firing event.
I can do this as a separate query to get the field's current value, increment it and use that new value in the update statement, but that seems like a lot of extra overhead. This sort of scenario can't be that uncommon, so I assume that there's some straightforward way to do this within an update statement, but I don't know what it is. I also realize that I could do this as a stored procedure, but for now I want to just do it in the update statement if possible.
I've done this for now by expanding the existing before-insert trigger to a before-insert-or-update trigger:
CREATE TRIGGER TBIU_RPRS1 FOR RPRS ACTIVE BEFORE INSERT OR UPDATE
AS BEGIN
    IF (INSERTING AND NEW.ID IS NULL) THEN NEW.ID = NEXT VALUE FOR SEQ_GLOBAL;
    IF (UPDATING) THEN NEW.STRIKES = OLD.STRIKES + NEW.STRIKES;
END;
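With that trigger in place, an update only needs to supply the impact count for the event; the trigger adds it to the stored total. A sketch (the timestamp and result-code column names are assumptions, not from the original schema):

```sql
UPDATE RPRS
SET STRIKES = 3,                      -- 3 impacts in this firing event
    FIRED_AT = CURRENT_TIMESTAMP,     -- hypothetical timestamp column
    RESULT_CODE = 0                   -- hypothetical result-code column
WHERE ID = 42;
-- The BEFORE UPDATE trigger rewrites NEW.STRIKES to OLD.STRIKES + 3.
```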
Running counters, sums, etc. used to be called "stored aggregates". They are usually maintained by triggers on event tables. But before using them, make sure that a simple view with a plain count() is not fast enough for you.
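A minimal sketch of the plain-count() alternative, assuming a hypothetical IMPACT_EVENTS table with one row per impact:

```sql
-- Hypothetical events table: one row per recorded impact.
CREATE VIEW DEVICE_STRIKES AS
SELECT DEVICE_ID, COUNT(*) AS STRIKES
FROM IMPACT_EVENTS
GROUP BY DEVICE_ID;
```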
Calling Oracle's sequence NEXTVAL seems to behave irregularly. Our business constraint (a unique token) requires generating unique numbers, so we use an Oracle sequence. We run a Hibernate/Spring Boot application, but in this case we call NEXTVAL through a native query (JDBC), so Hibernate's sequence management is not involved. After the application has been running in production for some time, our unique token hits a unique-constraint exception: the Oracle sequence seems to generate the same number again, or stops increasing after a while.
Across our unique tokens, the sequence values normally differ by 1 or 2. But when the sequence value stops increasing (on NEXTVAL), we have to increment the sequence manually by 2494 (CACHE 1000, ORDER) to keep our tokens unique. Our sequence is declared with ORDER, yet it gets stuck at certain values.
Our sequence definition is like this:
CREATE SEQUENCE SEQ_98090 INCREMENT BY 1 CACHE 1000 ORDER NOCYCLE NOPARTITION;
To work around this, we manually increase the sequence value so that our business constraint (the unique token) stops hitting the unique-constraint exception.
Our application runs on multiple servers.
Debugging the sequence value from outside the application, we see that the Oracle sequence value does not increase.
String queryString = "SELECT " + dbSequence + ".NEXTVAL FROM DUAL";
Query query = em.createNativeQuery(queryString);
BigDecimal issuedTokenBin = null;
try {
    issuedTokenBin = (BigDecimal) query.getSingleResult();
} catch (Exception ex) {
    log.error("Error :", ex);
}
Since we get the value from an Oracle sequence, it should always return an incremented value.
From the image: the Oracle sequence got stuck at the first selected row (the application got a unique-constraint violation). We then manually increased it and the application kept going. The last 8 digits of the unique token, i.e. 79140 and 81634, come from the Oracle sequence.
The overall benefit of sequence caching is speed: the higher the cache value, the less often Oracle must update the data dictionary to allocate a new batch of sequence numbers. The downside of caching is that if the database instance restarts, any values still sitting in the cache are abandoned, leaving gaps.
Therefore, if you want to minimize gaps in your sequence values, you must pay the price by declaring the sequence NOCACHE (though even then, rolled-back transactions will still leave gaps).
Do you really need caching in your sequence? How about using the following:
CREATE SEQUENCE SEQ_TEST INCREMENT BY 1 NOCACHE ORDER NOCYCLE ;
In SAP HANA we use sequences.
However, I am not sure what to define for RESET BY:
do I use select max(ID) from tbl or select max(ID) + 1 from tbl?
Recently we got a unique constraint violation on the ID field.
The sequence is currently defined with reset by select max(ID) from tbl.
Also, is it better to avoid the RESET BY option altogether?
The common logic for the RESET BY clause is to check the current value (max(ID)) and add an offset (e.g. +1) to avoid a double allocation of a key value.
Not using the option effectively disables the ability to automatically set the current sequence value to a value that will not collide with existing stored values.
To provide some context: usually the sequence number generator uses a cache (even though it's not set up by default) to allow for high-speed consumption of sequence numbers.
In case of a system failure, the numbers in the cache that have not yet been consumed are "lost" in the sense that the database doesn't retain the information which numbers from the cache had been fetched in a recoverable fashion.
By using the RESET BY clause, the "loss" of numbers can be reduced as the sequence gets set back to the last actually used sequence number.
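A sketch of this pattern in SAP HANA syntax (the sequence, table, and column names are assumptions for illustration):

```sql
-- On restart, RESET BY re-derives the sequence's current value from the
-- table itself; the +1 offset prevents re-issuing the highest stored ID.
CREATE SEQUENCE seq_tbl_id
    RESET BY SELECT IFNULL(MAX(ID), 0) + 1 FROM tbl;
```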
In a working Postgres-XL cluster, a table has the following sequence:
CREATE SEQUENCE public.t1_id_seq
START WITH -1000000
INCREMENT BY -1
MINVALUE -9223372036854775807
NO MAXVALUE
CACHE 1;
after any power outage there is a good chance that an insert statement fails with this error:
InternalServerError
Can not get current value of the sequence
If we explicitly supply a value for that column, the row is inserted. There are also other tables, with sequences that have a positive start value and increment, which remain intact after the incident.
Any help on how to prevent this failure, please?
I created a sequence with a starting value of 2.
create sequence seq_ben
start with 2
increment by 1
nocache
nocycle;
When I was asked to show the next two numbers of the sequence, I wrote
select seq_ben.nextval from dual
and ran it twice to get the next two values. Then I was asked to show the next value of the sequence without triggering it to move to the next number, and to use its next three values to add new rows. Is this possible? How can I get the next value without advancing the sequence?
You can use CURRVAL, if you have referenced NEXTVAL at least once in the current session.
However, I believe that if you really need to know the next number in the sequence, there is something fundamentally wrong with your design. Sequences are designed so that NEXTVAL is an atomic operation and no two sessions can get the same number; in other words, a sequence is an incrementing unique identifier. That is the only guarantee it gives you, so with this design it is almost meaningless to ask for the next possible value of a sequence.
You may try MAX(), which is often used as a poor man's substitute for a sequence.
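To illustrate the NEXTVAL/CURRVAL interplay described above (CURRVAL is session-local and does not advance the sequence), assuming seq_ben was just created as in the question:

```sql
SELECT seq_ben.NEXTVAL FROM dual;  -- 2: first value (START WITH 2)
SELECT seq_ben.CURRVAL FROM dual;  -- 2 again: CURRVAL re-reads without advancing
SELECT seq_ben.NEXTVAL FROM dual;  -- 3
```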