Can I specify the initial value for an IDENTITY column using Hibernate/JPA annotations on top of an HSQLDB database? The relevant source code looks like this so far:
@Id
@GeneratedValue(strategy = GenerationType.AUTO)
public Long getId() {
    return id;
}
The generated DDL looks like this:
id bigint generated by default as identity (start with 1)
What I'd like to do is make the ID start with a value of 10,000 via annotations.
Note: This is a new application so I'm using the latest versions of Hibernate and HSQLDB.
As this question was not initially tagged as Hibernate, it was probably missed by Hibernate experts. I just noticed that Hibernate's HSQLDialect hard-codes (start with 1) for all identity creation, so there is probably no way to override it through annotations.
You should look into executing an SQL statement that resets the identity AFTER the table is created but before data is added. The SQL looks like:
ALTER TABLE hibernate_generated_table_name ALTER COLUMN id RESTART WITH 10000
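If Hibernate generates the schema for you (hbm2ddl.auto=create), one way to run that statement automatically is to put it in an import.sql file on the classpath, which Hibernate executes after the DDL. A sketch; the table name my_entity is an assumption, use whatever name Hibernate actually generates:

```sql
-- import.sql (run by Hibernate after schema creation when hbm2ddl.auto=create)
-- "my_entity" is an assumed table name
ALTER TABLE my_entity ALTER COLUMN id RESTART WITH 10000;
```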
I'm using Oracle's Autonomous Database service, with ORDS providing the REST functionality.
When making updates to a table (docs here), with an identity column id that is GENERATED ALWAYS, it seems the POST request, even when no id value is supplied in the request body, gets parsed by the REST service as id: null.
This then gives me Error Message: ORA-32795: cannot insert into a generated always identity column ORA-06512: at line 4.
Using a SQL statement to insert into the table without specifying the id column works as expected.
Is there a way to keep the identity column always generated (so the ID of a new row cannot be specified), while allowing for POST updates?
The AutoREST functionality will always include all columns, so there is no solution other than to:
Develop your own POST handler and omit the IDENTITY column from it, or
Change the identity type, for example from GENERATED ALWAYS to GENERATED BY DEFAULT ON NULL, so that Oracle generates a value when you set it to null.
I would go for the second.
ALTER TABLE IDENTITY_TABLE MODIFY ( ID GENERATED BY DEFAULT ON NULL AS IDENTITY );
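With the column changed to GENERATED BY DEFAULT ON NULL, the explicit NULL coming from the REST layer now triggers generation instead of raising ORA-32795. A sketch; the second column name is an assumption:

```sql
-- After the ALTER above, a NULL id is replaced by a generated value
-- ("name" is an assumed column for illustration)
INSERT INTO identity_table (id, name) VALUES (NULL, 'example');
```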
There is a great post from Jeff Smith explaining this situation:
AUTO POST and IDENTITY COLUMNS
I'm porting a SQL Server based app to Oracle. Our Oracle DBA has given me a schema that was supposed to be identical to the original SQL Server schema (and generated from it), but the auto generated keys are missing. I am trying to alter these table PK's from a normal INT to incrementing. I am doing so with Oracle SQL Developer 4.0.3 and Oracle 12c.
The error I receive is ORA-01442: column to be modified to NOT NULL is already NOT NULL
I get this after editing the table, selecting the column, and setting its Identity dropdown to 'Generated as Identity'. I am not sure why SQL Developer is attempting to make it NOT NULL when it's already a PK.
My questions are: Is this the proper way to set up a generated key? How can I get around this error? If I alter all the required columns, can the DBA use the schema to regenerate whatever procedure he used to create it in the first place so that it has proper generated keys, and is there a better solution for creating a good schema to go forward with?
Thanks.
If the column is already defined as NOT NULL, there is no need to re-define it as NOT NULL; that is why you get error ORA-01442.
The best way to obtain sequence values, like IDENTITY in SQL Server, is to define the column with a sequence-backed default before inserting rows:
CREATE SEQUENCE SEQ_NAME
START WITH 1
INCREMENT BY 1
NOCACHE
NOCYCLE;
ALTER TABLE table_name MODIFY column_name INT DEFAULT SEQ_NAME.NEXTVAL;
PS: This DEFAULT works with 12c. For 11g or earlier, you must create a trigger.
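For 11g and earlier, the trigger mentioned above might look like this; the trigger, table, and column names here are assumptions to be replaced with your own:

```sql
-- Pre-12c substitute for a sequence DEFAULT: fill the column on insert
CREATE OR REPLACE TRIGGER trg_table_name_id
BEFORE INSERT ON table_name
FOR EACH ROW
WHEN (NEW.column_name IS NULL)
BEGIN
  SELECT SEQ_NAME.NEXTVAL INTO :NEW.column_name FROM dual;
END;
/
```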
I would like to keep data which is already saved into a Table field varchar(32) and to convert it to BLOB in Firebird database.
I am using the IBExpert tool.
If it is possible, how can I do that?
Let's consider you have table TEST with one column NAME:
create table test (name varchar(32));
insert into test values('test1');
insert into test values('test2');
insert into test values('test3');
commit;
select * from test;
It is possible to change the column from varchar to BLOB by the following script:
alter table test add name_blob blob;
update test set name_blob = name;
commit;
alter table test drop name;
alter table test alter column name_blob to name;
commit;
select * from test;
Specifically in IBExpert this is easy to do with Firebird 2.1 and Firebird 2.5 via direct modification of the system tables (this "obsolete" method was prohibited in Firebird 3, but nothing was introduced to replace it).
This works in both directions, VARCHAR to BLOB and BLOB to VARCHAR.
You have to have a DOMAIN (that is, a named data type in SQL) of BLOB type; then IBExpert itself will issue the command that Firebird 2.x executes, if you set Firebird 2.x in the database options.
If you don't have IBExpert, you have to issue the following commands yourself:
CREATE DOMAIN T_TEXT_BLOB AS -- prerequisite
BLOB SUB_TYPE 1;
update RDB$RELATION_FIELDS -- the very action
set RDB$FIELD_SOURCE = 'T_TEXT_BLOB'
/* the name of the blob domain to be set as the field's new data type */
where RDB$FIELD_SOURCE = '.....' /* name of the column */
and RDB$RELATION_NAME = '....' /* name of the table */
See also:
https://firebirdsql.org/file/documentation/reference_manuals/fblangref25-en/html/fblangref25-ddl-domn.html
https://firebirdsql.org/file/documentation/reference_manuals/fblangref25-en/html/fblangref-appx04-relfields.html
When you create a column or change its data type without explicitly naming a domain (e.g. just varchar(10)), Firebird under the hood creates automatically-managed one-per-column domains with names like RDB$12345. While you can reuse those too, an explicit named domain is more practical and safer.
This method, however, fails on Firebird 3, where you do have to copy the whole table, as shown by Maxim above.
http://tracker.firebirdsql.org/browse/CORE-6052
The FB developers warn about some "bugs" and say "it never worked properly", but refuse to give details.
UPDATE I finally managed to reproduce the bug mentioned in the tracker.
At least in Firebird 2.1 reference counting is broken w.r.t. BLOB payloads.
So the trick seems to be to instantly re-write implicit blobs into explicit, tricking Firebird to think we supplied new content for all the BLOB values.
Assuming the names from Maxim's answer above...
To trigger and demonstrate the bug, take the "vanilla" VARCHAR database, apply the conversion described above, and issue the following commands:
update test set name = name -- trying to force data conversion, but instead triggering the Firebird reference-counting bug
select cast( name as VarChar(200) ) from test -- or any other command that actually reads the field contents; any such attempt is shot down with the infamous "invalid BLOB ID" Firebird error
To work around the bug we must keep Firebird's (broken) reference counting out of the picture. So we do a fake update that invokes the expression evaluator, so that the Firebird optimizer loses track of the value sources and fails to realize the data was not really changed.
update test set name = '' || name || '' -- really forcing data over-writing and conversion, bypassing BLOB reference counting.
select cast( name as VarChar(200) ) from test -- now works like a charm (unless 200 is too short for your strings, in which case you get an "overflow" error instead)
The update can be any other command that triggers the expression evaluator, for example update test set name = cast( name as VarChar( NNN ) ) - but then you would need to devise a large enough NNN for the specific column of the specific table. Concatenating with an empty string is universal and does the job, at least on Firebird 2.1.
The above applies to Firebird 2.1.7 Win32. I did not manage to trigger "invalid BLOB ID" with Firebird 2.5.8 Win64 - it "just worked".
At least when using a single-connection schema-update script, which is the intended way to do database upgrades anyway.
Maybe if schema upgrades were done while users were simultaneously and actively working, FB 2.5 would break too; I don't know.
Whether to use this risky shortcut, disregarding the FB developers' hints, or to use Maxim's "official" answer - possibly dropping and then re-creating half the database objects that happen to have dependencies on the field being dropped - is up to the reader.
When I add a new record I want SQL Server to automatically add a fresh ID.
There are already some records which have been migrated over (from Access) and until I finish preparing the server for other required functionality I will be manually migrating further records over (if this affects any possible answers).
What is the simplest way to implement this?
The simplest way would be to make the column an IDENTITY column. Here is an example of how to do this (it's not as simple as ALTER TABLE).
Make use of the Identity field type. This will automatically create a value for you using the next available number in the sequence.
Here is an example of how to create an Identity column (add a new column) on an existing table
ALTER TABLE MyTable ADD IdColumn INT IDENTITY(1,1)
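Since rows migrated from Access already exist, you may also want the next generated value to land above the current maximum id. In T-SQL this can be done with DBCC CHECKIDENT; a sketch, with the table name and seed value as assumptions:

```sql
-- Reseed the identity after migrating existing rows
-- ("MyTable" and 10000 are assumed values)
DBCC CHECKIDENT ('MyTable', RESEED, 10000);  -- the next inserted row gets 10001
```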
I have an HSQLDB database with a generated ID and I want the auto-incrementing values to always be above 100,000. Is this possible with HSQLDB? Is this possible with any database?
According to the HSQLDB documentation:
Since 1.7.2, the SQL standard syntax is used by default, which allows the initial value to be specified. The supported form is (<column name> INTEGER GENERATED BY DEFAULT AS IDENTITY (START WITH n [, INCREMENT BY m]) PRIMARY KEY, ...). Support has also been added for BIGINT identity columns. As a result, an IDENTITY column is simply an INTEGER or BIGINT column with its default value generated by a sequence generator.
...
The next IDENTITY value to be used can be set with:
ALTER TABLE <table name> ALTER COLUMN <column name> RESTART WITH <new value>;
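Putting the quoted syntax together, a table whose generated values start above 100,000 could be declared like this (the table and column names are assumptions for illustration):

```sql
-- HSQLDB identity column whose generated values start above 100,000
CREATE TABLE items (
    id BIGINT GENERATED BY DEFAULT AS IDENTITY (START WITH 100001) PRIMARY KEY,
    name VARCHAR(100)
);
```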
Here's how to do it in HSQLDB:
The next IDENTITY value to be used can be changed with the following statement. Note that this statement is not used in normal operation; it is only for special purposes, such as resetting the identity generator:
ALTER TABLE <table name> ALTER COLUMN <column name> RESTART WITH <new value>;
As far as I know, all major SQL databases allow you to set a seed value for auto-increment fields.
Update: Here's a list of identity/auto-increment implementations in the major SQL databases.
It is possible with SQL Server. When defining an auto-number column you can specify the starting number and the increment:
IDENTITY(100000, 1)
I know it's possible with SQL Server, and I imagine it's possible with others.
With SQL Server you can set the ID column seed (starting number) and increment value.
You can do it with databases that use sequences, like Oracle and PostgreSQL. You specify a start value when you create the sequence.
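For those sequence-based databases, the start value goes on the sequence itself; a sketch, with the sequence name as an assumption:

```sql
-- Oracle / PostgreSQL: sequence whose values begin above 100,000
CREATE SEQUENCE id_seq START WITH 100001 INCREMENT BY 1;
```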
This suggests that you can do it with HSQL as well.
Not sure about HSQL, but in MS SQL, yes, it's possible: set the ID to auto-increment and set the seed value to 100,000.