Viewing a Grails schema while it runs in memory? - sql

I want to view a Grails schema for the default hsqldb in-memory database, but when I connect to the in-memory database with SquirrelSQL or DbVisualizer as userid: sa, password: (nothing), I only see two schemas:
INFORMATION_SCHEMA
PUBLIC
And neither contains my Domain tables. What's going on?

You need to set the hsqldb database to file-based mode and set shutdown to true, as outlined here.
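For example, in grails-app/conf/DataSource.groovy the development environment can be switched from the in-memory URL to a file-based one. A minimal sketch, assuming the standard Grails 1.x defaults (adjust the devDB name to your project):

dataSource {
    pooled = true
    driverClassName = "org.hsqldb.jdbcDriver"
    username = "sa"
    password = ""
}
environments {
    development {
        dataSource {
            dbCreate = "create-drop"
            // file-based DB; shutdown=true flushes data to disk when the
            // last connection closes, so external tools such as SquirrelSQL
            // can open the files once the app has released them
            url = "jdbc:hsqldb:file:devDB;shutdown=true"
        }
    }
}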

If you want to access the in-memory database, there's a writeup on how to do that here: http://act.ualise.com/blogs/continuous-innovation/2009/07/viewing-grails-in-memory-hsqldb/
There's also a new plugin that gives you access to a web-based database console that can access any database that you have a JDBC driver for, including the in-memory hsql db. The plugin docs are at http://grails.org/plugin/dbconsole and you install it the usual way, i.e. grails install-plugin dbconsole. Unfortunately the plugin has an artificial restriction to Grails 1.3.6+, so if you're using an older version of Grails you can use the approach from the blog post that inspired the plugin, http://burtbeckwith.com/blog/?p=446
To use the database console, select "Generic HSQLDB" from the settings dropdown and change the values to match what's in DataSource.groovy. This will probably just require changing the url to jdbc:hsqldb:mem:devDB

You need to set up a shared hsql database: Creating a shared HSQLDB database
Edit: there is NO way to expose an in-memory hsqldb directly. Either create a Server or WebServer, or use a file URL.
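For the Server route, here is a minimal sketch for grails-app/conf/BootStrap.groovy, mirroring the approach in the blog posts above (the devDB name is an assumption and must match the mem: URL in your DataSource.groovy):

import org.hsqldb.Server

class BootStrap {
    def init = { servletContext ->
        // expose the in-process mem:devDB database over TCP; because the
        // Server runs in the same JVM it shares the in-memory database, so
        // external tools can connect to jdbc:hsqldb:hsql://localhost/devDB
        def server = new Server()
        server.setDatabaseName(0, 'devDB')
        server.setDatabasePath(0, 'mem:devDB')
        server.setPort(9001)
        server.start()
    }
}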

Related

Migrate from H2 to PostgreSQL

I need to replace H2 with PostgreSQL in the WSO2 API Manager. Since there is currently data saved in H2, I need to move it to PostgreSQL.
I found the command
SCRIPT TO 'dump.sql'
to export the data to .sql files, but I could not use it because I was not given the credentials to access the database, so I had to retrieve the data from the .mv.db files that H2 generates. In those files the data is not encrypted, but the password obviously is. To export the data to .sql files I used the command
java -cp h2-*.jar org.h2.tools.Recover -dir file_path -db file_name.
The .sql files are generated correctly, but when I try to import them into PostgreSQL with the command
psql -U db_user db_name < dump_name.sql
numerous syntax errors come up, probably due to the incompatibility between the H2 and PostgreSQL dialects. Is there a way to export the data so that it can then be imported into PostgreSQL? Alternatively, is there another way to migrate the data?
This is changing the database vendor, and we don't support such use cases. There are different scripts in the /[PRODUCT_HOME]/dbscripts folder, and you need to set up the target database (in your case PostgreSQL) using the correct scripts. This is due to the nature of the differences between database vendors: the datatypes and schema differ from one vendor to another.
The correct approach is to go through a migration. You can set up a new environment with PostgreSQL and use a 3rd-party tool or a tool provided by the database vendor to migrate data from H2 to PostgreSQL. There is no straightforward method to change the database from H2 to PostgreSQL.
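If no vendor tool fits, a last-resort alternative is a small JDBC copy script that reads from the recovered H2 database and writes into the PostgreSQL schema created from /dbscripts, table by table. A rough sketch in Groovy, not a supported WSO2 procedure; verify every table and column name against your own schema before using it:

import groovy.sql.Sql

// source: the H2 file database recovered from the .mv.db files
def h2 = Sql.newInstance('jdbc:h2:./file_name', 'sa', '', 'org.h2.Driver')
// target: the PostgreSQL database created with the /dbscripts scripts
def pg = Sql.newInstance('jdbc:postgresql://localhost:5432/db_name',
        'db_user', 'db_password', 'org.postgresql.Driver')

// copy one table; repeat per table, respecting foreign-key order
h2.eachRow('SELECT API_ID, API_NAME, API_VERSION FROM AM_API') { row ->
    pg.execute('INSERT INTO AM_API (API_ID, API_NAME, API_VERSION) VALUES (?, ?, ?)',
            [row.API_ID, row.API_NAME, row.API_VERSION])
}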
For more information on the product migration - https://apim.docs.wso2.com/en/latest/install-and-setup/upgrading-wso2-api-manager/upgrading-guidelines/
WSO2 does not have any scripts or tools for cross-DB migrations. However, you can use the API Controller[1] to migrate APIs and Applications from the previous environment with the H2 DB to a new one with PostgreSQL.
[1] - https://apim.docs.wso2.com/en/latest/install-and-setup/setup/api-controller/getting-started-with-wso2-api-controller/
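A typical apictl round trip looks roughly like this; the environment names are placeholders and the exported-archive path may differ between apictl versions, so check the linked docs for your release:

apictl add env old --apim https://old-host:9443
apictl add env new --apim https://new-host:9443
apictl login old -u admin -p admin
apictl export api -n SampleAPI -v 1.0.0 -e old
apictl login new -u admin -p admin
apictl import api -f ~/.wso2apictl/exported/apis/old/SampleAPI_1.0.0.zip -e new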

Keycloak - Using a UUID datatype in Postgres instead of VARCHAR(36)

One thing that popped out at me immediately when I connected PostgreSQL to my Keycloak realm was that UUIDs, specifically user IDs, are stored as VARCHAR(36).
Sure, it makes sense as VARCHAR is nearly universal between databases and UUID isn't. But I really need not explain why I should be using the UUID type instead of VARCHAR. My question is, what's the best way to do this with Keycloak? Is it even possible to customize the data types like this in Keycloak?
Keycloak uses Liquibase to manage database schemas, so I would say you need custom Liquibase changelogs to customize the default DB schema.
See How do i update the Liquibase scripts for the kecloak 12.0.4 image?
Keep in mind that your DB customization might conflict with future vendor Liquibase changelogs. Also, Keycloak puts an Infinispan cache in front of DB operations, so IMHO this kind of schema optimization will yield only small Keycloak performance improvements.
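If you decide to try it anyway, a custom changeSet along these lines is the usual starting point. This is only a sketch, not an official Keycloak changelog: USER_ENTITY.ID is the real column, but every foreign key column that references it needs the same conversion, and raw SQL is used because PostgreSQL needs a USING clause for the varchar-to-uuid cast:

<changeSet id="user-entity-id-to-uuid" author="custom">
    <sql>ALTER TABLE USER_ENTITY ALTER COLUMN ID TYPE uuid USING ID::uuid</sql>
</changeSet>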

MongoDB connection with Pentaho Kettle (PDI)

I've just downloaded Pentaho Data Integration Community (pdi-ce-6.1.0.1-196), a.k.a. Kettle, with the goal of designing an ETL routine to make nightly migrations from a MongoDB schema into PostgreSQL.
I couldn't achieve the very first task: creating a MongoDB connection. MongoDB is not listed as a Connection Type in the New Connection dialog, so I chose Generic database. Then, I failed to find anything related to MongoDB for the Custom Driver Class Name field required by the generic connection.
Is it possible that the installation/configuration went wrong with Kettle? I remember that I had to kill the first startup because it hung forever.
Or does PDI-CE lack some component that I must get somewhere else?
PDI handles MongoDB differently than other databases.
If working on a transformation (vs a job), go to the "Big Data" group of steps and there are two steps - one for MongoDB Input and one for MongoDB Output.
Within those steps you specify the connection information to your database.
Hope that helps,
Mark
P.S. There is also a "MongoDB Delete" in the marketplace that comes in useful when deleting data from collections.

How to migrate the data from Magnolia CMS Apache Jackrabbit content repository to normal SQL SERVER database

I am new to Magnolia CMS and the Apache Jackrabbit content repository concepts.
There is a web application which is using Magnolia CMS. Magnolia is using SQL SERVER 2012 database as persistence manager.
The content repository is implemented with Apache Jackrabbit. There are two separate configurations of the Magnolia CMS used for the application, referred to as the public and author instances.
Now here we are trying to replace the existing Magnolia CMS with a custom ASP.NET MVC 5 application with all the functionalities.
I analysed the tables in the SQL SERVER database and found that the data is stored as Node_ID and Bundle_Data values, which are very difficult to analyse.
In short, the data is not easy to interpret.
Based on the custom CMS a new database model for author instance (SQL SERVER 2012) is developed.
Hence, as part of the migration task, I am trying to migrate the old data that is stored in SQL SERVER via the Apache Jackrabbit content repository implementation to a normal SQL SERVER 2012 database (as per the new database model).
Can anyone tell me whether there are any proven methods or tools available to accomplish this task?
The question is more on the Jackrabbit side, not so much on the Magnolia side, especially since you want to replace Magnolia entirely, not just the persistence layer:
Now here we are trying to replace the existing Magnolia CMS with a
custom ASP.NET MVC 5 application with all the functionalities.
Although my real question is whether you actually want to replace Jackrabbit entirely, or whether you could still use Jackrabbit with your ASP.NET application, just with an MS SQL Server data store (which would be my personal suggestion)? Otherwise you will be giving up all the benefits that Jackrabbit offers.
Jackrabbit does support SQL Server and I would suggest to use it.
https://wiki.apache.org/jackrabbit/DataStore#Configuration-1:
Currently supported are: db2, derby, h2, mssql, mysql, oracle,
sqlserver.
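In repository.xml that DbDataStore configuration looks roughly like this; all connection values below are placeholders for your own SQL Server instance:

<DataStore class="org.apache.jackrabbit.core.data.db.DbDataStore">
    <param name="url" value="jdbc:sqlserver://localhost:1433;databaseName=jackrabbit"/>
    <param name="driver" value="com.microsoft.sqlserver.jdbc.SQLServerDriver"/>
    <param name="user" value="dbuser"/>
    <param name="password" value="dbpass"/>
    <param name="databaseType" value="sqlserver"/>
    <param name="minRecordLength" value="1024"/>
</DataStore>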
Developing a WebCMS with just ASP.NET and SQL Server and without a content repository layer in between sounds like developing everything that a WebCMS usually comes with from scratch, especially if you want to have all the functionality that Magnolia offers (versioning, history, search, etc.).
You can check the details regarding the Jackrabbit data store here: http://wiki.apache.org/jackrabbit/DataStore although I am wondering why you or your customer would want to change the data store of the content repository to SQL Server. I guess you are not speaking of using SQL Server for the persistence of the metadata, but really to store the binary content (a mistake that, by the way, OpenCms, another Java-based open source WebCMS, made in their architecture design - imho).
Note that usually large files are not stored in the database itself (with Magnolia), but on the file system.
https://wiki.magnolia-cms.com/display/WIKI/Setting+up+a+Jackrabbit+persistence+manager#SettingupaJackrabbitpersistencemanager-Datastorageandbackup:
BLOBs are not by default stored in the database when they exceed a certain threshold defined in your Jackrabbit configuration - instead they are saved on the file system. The default threshold used by a Magnolia installation is 1024 bytes. All files above the defined threshold are put onto the filesystem and not in the database.
In case you really want to get rid of Jackrabbit entirely and only use SQL Server as the persistence layer, storing all binary content in it regardless of size (not recommended), I would write a custom export/import script for it, which queries the Jackrabbit repo (e.g. via the standard CMIS protocol), takes the content from the file system, reads it as a FileInputStream and writes it to the SQL Server DB (example: http://www.java2s.com/Code/Java/Database-SQL-JDBC/StoreBLOBsdataintodatabase.htm). This would be my suggested method.
I don't think there are any out-of-the-box tools for that.
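To make the shape of such a script concrete, here is a skeleton in Groovy. It is a sketch under several assumptions: the repository is reachable remotely (e.g. via Jackrabbit's davex endpoint), the credentials work, and the Pages table plus the /website path are invented placeholders for whatever your new model defines:

import javax.jcr.Node
import javax.jcr.Repository
import javax.jcr.Session
import javax.jcr.SimpleCredentials
import org.apache.jackrabbit.commons.JcrUtils
import groovy.sql.Sql

// connect to the running Jackrabbit repository
Repository repo = JcrUtils.getRepository('http://localhost:8080/server')
Session session = repo.login(new SimpleCredentials('admin', 'admin'.toCharArray()))

// target: the new custom SQL Server 2012 model
def mssql = Sql.newInstance(
        'jdbc:sqlserver://localhost:1433;databaseName=newcms',
        'dbuser', 'dbpass', 'com.microsoft.sqlserver.jdbc.SQLServerDriver')

// walk the content tree and flatten each node into a row
void exportNode(Node node, Sql db) {
    def title = node.hasProperty('title') ? node.getProperty('title').string : null
    db.execute('INSERT INTO Pages (Path, Title) VALUES (?, ?)', [node.path, title])
    node.nodes.each { child -> exportNode(child, db) }
}
exportNode(session.getNode('/website'), mssql)
session.logout()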

Bluemix sql database recovery

When I removed my app from the Bluemix dashboard, it removed the associated SQL db as well. I have a script that creates new tables/indexes with our schema name, but the free version of SQL Database does not support user-defined schema names. The problem is that our code needs our schema name rather than the user*** schema name.
Does Bluemix still offer the small version of SQL Database? If not, is there a way to recover our database, or a way to rename the user*** schema created by the free plan to the name we want?
Unfortunately it is not possible to use a user-defined schema name. Anyway, as a general rule in development, properties like the schema name or connection details should be parameterized, in order to have more flexibility in your solution.
What is preventing your SQL from being adapted to the new db instance? You could have a simple script which loads it and runs it on the instance, without any need for a hardcoded schema name.
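For example, instead of hardcoding the schema, the script can read it from the environment and set it on the connection before running the DDL. A sketch in Groovy, assuming the DB2-based SQL Database service and placeholder credentials:

import groovy.sql.Sql

// the schema name comes from the service credentials / an env variable
def schema = System.getenv('DB_SCHEMA') ?: 'USERXXXX'
def db = Sql.newInstance('jdbc:db2://host:50000/SQLDB',
        'db_user', 'db_password', 'com.ibm.db2.jcc.DB2Driver')

// make unqualified table names resolve against that schema; plain string
// concatenation avoids Groovy turning a GString into a bind parameter
db.execute('SET CURRENT SCHEMA ' + schema)
db.execute('CREATE TABLE ACCOUNTS (ID INT NOT NULL PRIMARY KEY, NAME VARCHAR(64))')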