Calling NEXTVAL on an Oracle sequence seems to behave irregularly for us. Our business constraint (a unique token) requires generating unique numbers, so we use an Oracle sequence. The application is a Hibernate/Spring Boot application, but in this case we call NEXTVAL through a native query (JDBC), so Hibernate's sequence management is not involved. After the application has been running in production for some time, our unique token hits a unique constraint exception: the Oracle sequence appears to hand out the same number again, or stops increasing after a while.
Between consecutive unique tokens, the sequence values normally differ by 1 or 2. But when the sequence value stops increasing on NEXTVAL, we have to manually increment the sequence by 2494 (it is defined with CACHE 1000 ORDER) to restore uniqueness. The sequence is declared ORDER, yet it gets stuck at some value and needs that manual bump of 2494 before our tokens are unique again.
Our sequence definition is like this:
CREATE SEQUENCE SEQ_98090 INCREMENT BY 1 CACHE 1000 ORDER NOCYCLE NOPARTITION;
To work around this, we manually increase the sequence value so that our unique business constraint (the unique token) no longer raises a unique constraint exception, as sketched below.
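For reference, the manual fix looks roughly like this (a sketch; 2494 is just the gap we observed, and the ALTER/NEXTVAL/ALTER dance is one common way to push a sequence forward):
ALTER SEQUENCE SEQ_98090 INCREMENT BY 2494;
SELECT SEQ_98090.NEXTVAL FROM DUAL;   -- consume one value to jump past the colliding range
ALTER SEQUENCE SEQ_98090 INCREMENT BY 1;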
Our application runs on multiple servers.
When we inspect the sequence value from outside the application, we can see that the Oracle sequence value does not increase.
import java.math.BigDecimal;
import javax.persistence.Query;

// dbSequence holds the sequence name, e.g. "SEQ_98090"; em is the injected EntityManager.
String queryString = "SELECT " + dbSequence + ".NEXTVAL FROM DUAL";
Query query = em.createNativeQuery(queryString);
BigDecimal issuedTokenBin = null;
try {
    issuedTokenBin = (BigDecimal) query.getSingleResult();
} catch (Exception ex) {
    log.error("Error :", ex);
}
Since we get the value from an Oracle sequence, every call should return an incremented value.
From the image: the Oracle sequence got stuck at the first selected row (the app got a unique constraint violation). Then we manually increased it and the application got going again. The last 8 digits of the unique token, i.e. 79140 and 81634, come from the Oracle sequence.
The overall benefit of sequence caching is speed: the higher the cache value, the less often Oracle must go to the data dictionary to obtain a new batch of sequence numbers. The negative effect of caching is that once a cache of values has been allocated, a restart of the database abandons whatever values were left in the cache.
Therefore, if you do not want to leave gaps in your sequence values, you must pay the price by creating your sequence with NOCACHE.
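If you want to see how far the cache has been allocated, the data dictionary shows it (a standard Oracle view; LAST_NUMBER is the high-water mark written to disk, i.e. the value just past the currently cached range):
SELECT sequence_name, cache_size, last_number
FROM   user_sequences
WHERE  sequence_name = 'SEQ_98090';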
Do you really need caching on your sequence? How about using the following:
CREATE SEQUENCE SEQ_TEST INCREMENT BY 1 NOCACHE ORDER NOCYCLE ;
Regards
Akash
Related
In SAP HANA we use sequences.
However, I am not sure what to define for the RESET BY clause:
do I use select max(ID) from tbl or select max(ID) + 1 from tbl?
Recently we got a unique constraint violation on the ID field.
The sequence is currently defined with RESET BY select max(ID) from tbl.
Also, is it even better to avoid the RESET BY option altogether?
The common logic for the RESET BY clause is to check the current value (max(ID)) and add an offset (e.g. +1) to avoid a double allocation of a key value.
Not using the option effectively disables the ability to automatically set the current sequence value to a value that will not collide with existing stored values.
To provide some context: usually the sequence number generator uses a cache (even though it's not set up by default) to allow for high-speed consumption of sequence numbers.
In case of a system failure, the numbers in the cache that have not yet been consumed are "lost" in the sense that the database doesn't retain the information which numbers from the cache had been fetched in a recoverable fashion.
By using the RESET BY clause, the "loss" of numbers can be reduced as the sequence gets set back to the last actually used sequence number.
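So the usual pattern is max(ID) + 1 rather than max(ID). A minimal sketch (the table tbl and column ID come from your question; IFNULL covers the empty-table case):
CREATE SEQUENCE seq_tbl_id
    START WITH 1 INCREMENT BY 1
    RESET BY SELECT IFNULL(MAX(ID), 0) + 1 FROM tbl;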
In a working Postgres-XL cluster, there is a table with the following sequence:
CREATE SEQUENCE public.t1_id_seq
START WITH -1000000
INCREMENT BY -1
MINVALUE -9223372036854775807
NO MAXVALUE
CACHE 1;
After any power outage there is a good chance that insert statements fail with this error:
InternalServerError
Can not get current value of the sequence
If I explicitly supply a value for that column, the row is inserted. There are also other tables with sequences that have a positive start value and increment, and those remain intact after the incident.
Any help on how to prevent this failure, please?
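A hedged sketch of how one might re-seat the sequence with PostgreSQL's setval() (the table t1 and column id are assumptions based on the sequence name, and Postgres-XL may coordinate distributed sequences differently):
-- Re-seat the sequence at the most negative id already used;
-- with INCREMENT BY -1, the next nextval() then returns min(id) - 1.
SELECT setval('public.t1_id_seq', (SELECT min(id) FROM public.t1));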
One of my mappings is running for a really long time (2 hours). From the session log I can see that the statement "Timeout based commit point" is where most of the time goes, and the busy percentage for the SQL transformation is very high (I ran the SQL query manually in the database and it works fine). Basically, there is a Router which splits the records between insert and update, and the update stream is the slow one. It has a SQL transformation, an Update Strategy and an Aggregator. I added a Sorter before the Aggregator, but no luck.
I also changed the commit interval, the Line Sequential Buffer Length and the Maximum Memory Allowed, following some other blogs. Could you please help me with this?
If possible, try to avoid transformations that build a cache, because if the input record count grows in the future, the cache size will grow with it and throughput will drop.
1) Aggregator: try to do the aggregation in the SQL override itself (see the sketch after this list)
2) Sorter: likewise, try to do the sorting in the SQL override itself
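As a rough illustration of pushing both into the source SQL override (the table and column names here are invented for the example, not taken from your mapping):
SELECT cust_id,
       SUM(amount) AS total_amount
FROM   sales
GROUP BY cust_id
ORDER BY cust_id;   -- aggregation and sorting done by the database, not in the mapping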
Generally, the SQL transformation is slow for large data loads, because for each input record an SQL session is invoked, a connection is established to the database, and the row is fetched. Say, for example, there are 1 million records: 1 million SQL sessions are initiated in the backend and the database is called each time.
What is the SQL transformation doing? Is it just generating a surrogate key, or is it fetching a value from a table based on a value derived in the stream?
For fetching a value from a table based on a derived value from the stream:
try to use a Lookup.
For generating a surrogate key, use an Oracle sequence instead.
Let me know if its purpose is anything other than that.
Also do the below checks.
Sort the session log by thread and just make a note of the start and end times of
the following:
1) Lookup cache creation (time between "Query issued" → "First row returned" → "Cache creation completed")
2) Reader thread first-row return time
Regards,
Raj
I created a sequence with a beginning value of 2.
create sequence seq_ben
start with 2
increment by 1
nocache
nocycle;
When I was asked to show the next two numbers of the sequence, I wrote
select seq_ben.nextval from dual
and ran it twice to get the next two values. Then I was asked to show the next number of the sequence without triggering it to move on, and to use its next three values to add new rows to the table above. Is this possible? How can I see the next value without advancing the sequence?
You can use CURRVAL, if you have referenced NEXTVAL at least once in the current session.
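For illustration, using the seq_ben sequence from the question:
SELECT seq_ben.NEXTVAL FROM dual;   -- advances the sequence, e.g. returns 2
SELECT seq_ben.CURRVAL FROM dual;   -- returns 2 again, without advancing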
However, I believe that if you really want to know the next number in the sequence, there is something fundamentally wrong about your design. Sequences are design such that NEXTVAL is an atomic operation, and no two sessions may get the same number. Or an incrementing unique identifier, in other words. That's the only guarantee it gives you. With this design, it is almost meaningless to ask for the next possible value of a sequence.
You may try to use MAX(), which is often used as a poor man's solution to sequences.
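Something along these lines (the table and column names are illustrative, and note that this is not safe under concurrent inserts):
SELECT NVL(MAX(id), 0) + 1 AS next_candidate FROM your_table;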
When I created a sequence for a table article, it started from 17, not 1:
CREATE SEQUENCE seq_article START WITH 1 INCREMENT BY 1;
CREATE OR REPLACE TRIGGER auto_article BEFORE INSERT ON article
FOR EACH ROW
BEGIN
  SELECT seq_article.NEXTVAL INTO :NEW.id_article FROM dual;
END;
/
I tried to delete all rows and create other data; this time it started from 19. How can I fix that?
I'm not sure that I understand the problem.
A sequence generates unique values. Unless you set the sequence to CYCLE and you exceed the MAXVALUE (not realistically possible given the definition you posted) or you manually reset the sequence (say, by setting the INCREMENT BY to -16, fetching a nextval, and then setting the INCREMENT BY back to 1), it won't ever generate a value of 1 a second time. Deleting the data has no impact on the next id_article that will be generated.
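That manual reset trick would look something like this (a sketch; -16 assumes you want to step back exactly 16 values):
ALTER SEQUENCE seq_article INCREMENT BY -16;
SELECT seq_article.NEXTVAL FROM dual;   -- steps the sequence back by 16
ALTER SEQUENCE seq_article INCREMENT BY 1;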
A sequence-generated column will have gaps. Whether because the sequence cache gets aged out of the shared pool or because a transaction was rolled back, not every value will end up in the table. If you really need gap-free values, you cannot use a sequence. Of course, that means that you would have to serialize INSERT operations which will massively decrease the scalability of your application.
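To see why gaps are unavoidable, consider a rolled-back insert (the title column is invented for the example; the trigger above supplies id_article):
INSERT INTO article (title) VALUES ('draft');   -- trigger consumes seq_article.NEXTVAL
ROLLBACK;                                       -- the row is gone, but the sequence value stays consumed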