How to create a persistent HSQLDB database? - hsqldb

I am new to HSQLDB and want to create a persistent database, so that when reopening HSQLDB, the previously created tables and their contents still exist. I have read the HSQLDB documentation, but cannot find this information. My current HSQLDB.bat is the following:
cd %HSQLDB%
java -cp ../lib/hsqldb.jar org.hsqldb.util.DatabaseManagerSwing
REM java -cp ../lib/hsqldb.jar org.hsqldb.server.Server
REM java -classpath lib/hsqldb.jar org.hsqldb.server.Server --database.0 file:hsqldb/demodb --dbname.0 testdb
When running the batch file, the following dialog opens:
Note that the URL field contains jdbc:hsqldb:mem:..
After [OK], I run two scripts, one creating two tables, the other filling them with test data. But after reopening HSQLDB, the tables and their data are gone.
What is missing, what is wrong? A SHUTDOWN does not change anything. I also replaced mem with file, but could not make the database persistent. What did I overlook in the HSQLDB guide?

Use the drop-down list for Type: and select HSQL Database Engine Standalone, then edit the suggested URL to add your database file path, such as jdbc:hsqldb:file:///C:/Program Files/hsqldb-2.5.0/hsqldb/data/dbname, where the last part of the path is the name of the database files.
Absolute paths are preferred, to avoid depending on the directory from which you run your batch file.
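As a minimal sketch, your batch file could pass such a file URL directly, so that DatabaseManagerSwing opens the persistent database on startup (the path and the default SA user below are assumptions; adjust them to your installation):
cd %HSQLDB%
java -cp ../lib/hsqldb.jar org.hsqldb.util.DatabaseManagerSwing --url "jdbc:hsqldb:file:///C:/Program Files/hsqldb-2.5.0/hsqldb/data/dbname" --user SA
With a file: URL, the tables and data written by your scripts are stored in dbname.script and related files next to the chosen path, and they are read back the next time you connect.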

Related

How to make the file name differ for each daily backup

I created a backup cmd file with this command:
EXPDP system/system EXCLUDE=statistics DIRECTORY=bkp_dir DUMPFILE=FULLDB.DMP LOGFILE=FULLDB.log FULL=Y
It works well, but when I run the backup again, it finds that the file already exists
and terminates the process. It will not run unless I delete or rename the previous file. I want to add something to the dump file and log file names that makes them differ from day to day, such as the system date or a copy number.
The option REUSE_DUMPFILES specifies whether to overwrite a preexisting dump file.
Normally, Data Pump Export will return an error if you specify a dump
file name that already exists. The REUSE_DUMPFILES parameter allows
you to override that behavior and reuse a dump file name.
If you wish to use a separate file name for each day, you can build the name from the date command in a Unix/Linux environment:
DUMPFILE=FULLDB_$(date '+%Y-%m-%d').DMP
Similar techniques are available on Windows, as sketched below, if you're running expdp there.
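A hedged sketch for a Windows cmd script: the %DATE% substring offsets below assume a locale where %DATE% expands to something like Tue 01/28/2025, so verify with echo %DATE% and adjust them before relying on the names:
REM Build a YYYY-MM-DD stamp from the locale-dependent %DATE% value
set TODAY=%DATE:~10,4%-%DATE:~4,2%-%DATE:~7,2%
EXPDP system/system EXCLUDE=statistics DIRECTORY=bkp_dir DUMPFILE=FULLDB_%TODAY%.DMP LOGFILE=FULLDB_%TODAY%.log FULL=Y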

Run an initial Liquibase script

This is my second day using Liquibase.
I have a 'backup' or 'repository' with the database that I need to create locally on my PC.
I have looked at the documentation, but I'm really not 100% clear on how to run it.
I've updated the liquibase.properties file to reflect the correct paths, username, and password.
How do you run the update command to generate the tables and test data?
Windows 7
The Liquibase documentation on 'Adding Liquibase to an existing project' is probably the best place to start. Basically, you want to set the properties file so that it refers to the existing 'backup' database, and then run liquibase generateChangeLog.
This connects to the existing database and generates a changelog: a file (typically XML) that describes the structure of the existing database. You then create a new properties file that connects to your local database and run liquibase update to apply the changelog there and create that structure. Note that this typically transfers only the structure (tables, keys, indexes, etc.), not the data. If you want test data as well, you can either export it from the existing database or craft the changesets manually. To export the data, a command like this would be used:
java -jar liquibase.jar --changeLogFile="./data/<insert file name>" --diffTypes="data" generateChangeLog
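For the update step against your local database, a minimal sketch of the properties file might look like this; the driver, URL, credentials, and changelog path are placeholders, not values from the question:
# liquibase.properties for the local target database
driver: org.hsqldb.jdbcDriver
url: jdbc:hsqldb:file:/path/to/localdb
username: SA
password:
changeLogFile: ./data/changelog.xml
With that in place, running liquibase update (or java -jar liquibase.jar update) applies the changelog to the local database.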

Execute a Service Builder generated SQL file on PostgreSQL

I would like to execute the SQL files generated by Service Builder, but the problem is that the SQL files contain types like LONG, VARCHAR, etc.
Some of these types don't exist in PostgreSQL (for example, LONG corresponds to BIGINT).
Is there a simple way to convert the SQL files' structures so that they can be run on PostgreSQL?
Execute ant build-db on the plugin and you will find an sql folder with various vendor-specific scripts.
Daniele is right: using the build-db task is the correct and intended way to do it.
But... I remember a similar situation some time ago: I had only a Liferay pseudo-SQL file and needed to create proper DDL from it. I managed to do this in the following way:
You need to have Liferay running on your desktop (or on the machine where the source SQL file is), as this operation requires the portal Spring context to be fully wired.
Go to Configuration -> Server Administration -> Script
Change the language to Groovy
Run the following script:
import com.liferay.portal.kernel.dao.db.DB
import com.liferay.portal.kernel.dao.db.DBFactoryUtil
// Obtain the PostgreSQL-specific DB implementation from the portal context
DB db = DBFactoryUtil.getDB(DB.TYPE_POSTGRESQL)
// Translate /path/to/folder/with/your/sql/filename.sql into PostgreSQL DDL
db.buildSQLFile("/path/to/folder/with/your/sql", "filename")
where the first parameter is the path and the second is the file name without the .sql extension; the file on disk must have the extension, i.e. it must be called filename.sql.
This will produce a tables folder next to your filename.sql, containing a single tables-postgresql.sql file with your PostgreSQL DDL.
As far as I remember, Service Builder uses the same method to generate database-specific code.
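Once you have the vendor-specific script (from either approach), a plain psql call can run it against your database; the host, user, and database name below are placeholders:
psql -h localhost -U liferay -d lportal -f tables/tables-postgresql.sql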

Connect databases to HSQLDB Server

I'm using hsqldb-1.8 in stand-alone mode. I want to start the server with some databases, and I'd like to use this command line from the HSQLDB documentation:
java -cp ../lib/hsqldb.jar org.hsqldb.Server -database.0 file:mydb -dbname.0 xdb
My problem is that I already have a script for each database I need, and this command line creates a new database without using my script. Maybe the problem comes from one of these points:
The location of my script
The extension of my script, which is an SQL file
Something wrong or missing in the command line
The command line simply can't do it. If so, is there another way to do it?
I'd like to stay in the console for all of that, so that I only have to launch one script to do the whole job. Any help will be much appreciated! Thanks
I found the solution to my issue :)
I've created a server.properties file, located in the directory ../hsqldb-1.8.10/hsqldb/, which contains this:
server.database.0=file:mydb;user=test;password=test
server.dbname.0=mydb
I've also created the mydb.script file with this content:
CREATE SCHEMA PUBLIC AUTHORIZATION DBA
CREATE MEMORY TABLE MYDB(ID BIGINT NOT NULL,VERSION INTEGER NOT NULL,NOM VARCHAR(255))
CREATE USER TEST PASSWORD "TEST"
GRANT DBA TO TEST
SET WRITE_DELAY 10
SET SCHEMA PUBLIC
INSERT INTO MYDB VALUES(1,0,'test')
Then, I launch the HSQLDB Server with this command:
java -cp ../lib/hsqldb.jar org.hsqldb.Server
We can see that the database is successfully created:
[Server#10f0f6ac]: Database [index=0, id=0, db=file:mydb, alias=mydb] opened successfully in 313 ms.
To check that the database really contains my data, I use the HSQLDB DatabaseManager tool with this command:
java -cp ../lib/hsqldb.jar org.hsqldb.util.DatabaseManager
To connect:
URL: jdbc:hsqldb:file:mydb
User: test
Password: test
After that, we are connected to the database. Execute the command SELECT * FROM MYDB; and we can see the row in the database.
Hope that helps! :)
The easiest way to connect to HSQLDB (as an in-memory/embedded database)
Connection string (shown here as Spring-style datasource properties):
<property name="driverClassName" value="org.hsqldb.jdbcDriver" />
<property name="url" value="jdbc:hsqldb:file:/home/vikask/elmo/db/elmo;" />
<property name="username" value="sa" />
<property name="password" value="" />
This is from a simple Java project demonstrating Hibernate, HSQLDB, and Maven using Java annotations. HSQLDB is used to keep the project simple: it can run as an in-memory database, and only a single JAR file needs to be included in the project.
To connect to an embedded HSQLDB database, select the JDBC (HSQLDB Embedded) connection type from the connection type list. Enter any login information if applicable, and then specify whether to use an existing embedded database or to have HSQLDB create a new one.
If the embedded database already exists, browse to the directory where the database files are located (such as database_name.log, database_name.script, and database_name.properties) and select the database_name.script file.
If the database does not exist, type in or browse to a new location for the HSQLDB database. HSQLDB will then create the necessary files, prefixed with the database name you typed in. For example, if you type /home/vikask/sample as the location, HSQLDB will create a file called sample.properties, and later sample.log, etc. The actual name of the database is simply sample in this case.
HSQLDB creates a file with the .script extension for its internal use. This is not something that you create.
First run the server and connect to one of the databases. The database will be empty at this point. Then use the SqlTool utility, which is included in the HSQLDB zip package, to execute YOUR script against the database. All the tables and data created by your script are then persisted in the HSQLDB database.
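A minimal sketch of that SqlTool step, assuming the server is running with -dbname.0 xdb as in the question (the urlid and script file name are placeholders). First define the connection in a sqltool.rc file in your home directory:
# default sa account with an empty password; adjust to your server's credentials
urlid xdb
url jdbc:hsqldb:hsql://localhost/xdb
username sa
password
Then execute your script against the running server (in 1.8, SqlTool lives in hsqldb.jar):
java -cp ../lib/hsqldb.jar org.hsqldb.util.SqlTool xdb mytables.sql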

Export DB with PostgreSQL's PgAdmin-III

How do I export a PostgreSQL DB into SQL that can be executed in another pgAdmin?
Exporting as a backup file doesn't work when there's a difference in versions.
Exporting as an SQL file does not execute when run in a different pgAdmin.
I tried exporting a DB with pgAdmin III, but when I tried to execute the SQL in another pgAdmin it threw errors in the SQL, and when I tried to restore a backup file it said there's a version difference so it can't do the import/restore.
So is there a 'safe' way to export a DB into standard SQL that can be executed plainly in the pgAdmin SQL editor, regardless of version?
Don't try to use PgAdmin-III for this. Use pg_dump and pg_restore directly if possible.
Use the version of pg_dump from the destination server to dump the origin server. So if you're going from (say) 8.4 to 9.2, you'd use 9.2's pg_dump to create a dump. If you create a -Fc custom format dump (recommended) you can use pg_restore to apply it to the new database server. If you made a regular SQL dump you can apply it with psql.
See the manual on upgrading your PostgreSQL cluster.
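A hedged sketch of that workflow, where the hosts, user, and database name are placeholders, and pg_dump is the binary shipped with the newer server:
pg_dump -h oldhost -U postgres -Fc -f mydb.dump mydb
createdb -h newhost -U postgres mydb
pg_restore -h newhost -U postgres -d mydb mydb.dump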
Now, if you're trying to downgrade, that's a whole separate mess.
You'll have a hard time creating an SQL dump that'll work in any version of PostgreSQL. Say you created a VIEW that uses a WITH query. That won't work when restored to PostgreSQL 8.3 because it didn't support WITH. There are tons of other examples. If you must support old PostgreSQL versions, do your development on the oldest version you still support and then export dumps of it for newer versions to load. You cannot sanely develop on a new version and export for old versions, it won't work well if at all.
More troubling, developing on an old version won't always give you code that works on the new version either. Occasionally new keywords are added when support for new specification features is introduced. Sometimes issues are fixed in ways that affect user code. For example, if you were to develop on the (ancient and unsupported) 8.2, you'd have lots of problems with implicit casts to text on 8.3 and above.
Your best bet is to test on all supported versions. Consider setting up automated testing using something like Jenkins CI. Yes, that's a pain, but it's the price for software that improves over time. If Pg maintained perfect backward and forward compatibility it'd never improve.
Export/Import with pg_dump and psql
1. Set PGPASSWORD:
export PGPASSWORD='123123123';
2. Export the DB with pg_dump:
pg_dump -h <<host>> -U <<username>> <<dbname>> > /opt/db.out
/opt/db.out is the dump path; you can specify your own.
3. Set PGPASSWORD again for your other host. If the host or the password is the same, this is not required.
4. Import the DB on your other host:
psql -h <<host>> -U <<username>> -d <<dbname>> -f /opt/db.out
If the username is different, find and replace it with your local username in the db.out file, making sure that only the username is replaced and not data; one possible approach is sketched below.
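A possible shell sketch of that replacement, where olduser and newuser are placeholders; review the matches first, since a blind replacement would also rewrite data rows that merely contain the name:
grep -n 'olduser' /opt/db.out | less
sed -i 's/\bolduser\b/newuser/g' /opt/db.out
(\b is GNU sed's word-boundary marker; adjust on platforms with a different sed.)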
If you still want to use PGAdmin, see the procedure below.
Export DB with PGAdmin:
Select the DB and click Export.
File Options
Name the DB file in your local directory
Select Format - Plain
Ignore Dump Options #1
Dump Options #2
Check Use Insert Commands
Objects
Uncheck any tables you don't want
Import DB with PGAdmin:
Create a new DB.
With the new DB selected, click Menu -> Plugins -> PSQL Console
Type the following command to import the DB:
\i /path/to/db.sql
If you want to export the schema and data separately:
Export Schema
File Options
Name the schema file in your local directory
Select Format - Plain
Dump Options #1
Check Only Schema
Check Blobs (By default checked)
Export Data
File Options
Name the data file in your local directory
Select Format - Plain
Dump Options #1
Check Only Data
Check Blobs (By default checked)
Dump Options #2
Check Use Insert Commands
Check Verbose messages (By default checked)
Note: Export/Import takes time depending on the DB size, and going through PGAdmin adds some more time.