Migrate from H2 to PostgreSQL - sql

I need to replace H2 with PostgreSQL in WSO2 API Manager. Since there is already data stored in H2, I need to move it over to PostgreSQL.
I found the command
SCRIPT TO 'dump.sql'
to export the data to .sql files, but I could not use it because I was not given the credentials to access the database, so I had to retrieve the data from the .mv.db files that H2 generates. In those files the data is not encrypted, but the password obviously is. To export the data to .sql files I used the command
java -cp h2-*.jar org.h2.tools.Recover -dir file_path -db file_name
The .sql files are generated correctly, but when I try to import them into PostgreSQL with the command
psql -U db_user db_name < dump_name.sql
numerous syntax errors come up, probably due to the incompatibility between the H2 and PostgreSQL dialects. Is there a way to export the data so that it can then be imported into PostgreSQL? Alternatively, is there another way to migrate the data?

This is changing the database vendor, and we don't support such use cases. There are different scripts in the [PRODUCT_HOME]/dbscripts folder, and you need to set up the target database (in your case PostgreSQL) using the correct scripts. This is necessary because of the differences between database vendors: the datatypes and schema differ from one vendor to another.
The correct approach is to go through a migration. You can set up a new environment with PostgreSQL and use a third-party tool or a tool provided by the database vendor to migrate the data from H2 to PostgreSQL. There is no straightforward method to change the database from H2 to PostgreSQL.
For more information on product migration, see https://apim.docs.wso2.com/en/latest/install-and-setup/upgrading-wso2-api-manager/upgrading-guidelines/
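If you only need to move the data (the schema itself comes from the dbscripts), one possible path is to dump each table to CSV from H2 and load it with psql. A minimal sketch, assuming the recovered database opens in the H2 tools and using AM_API as an example table:

-- in an H2 session, once per table (CSVWRITE writes a header row by default):
CALL CSVWRITE('/tmp/am_api.csv', 'SELECT * FROM AM_API');

-- in psql, after creating the schema from the PostgreSQL dbscripts:
\copy am_api FROM '/tmp/am_api.csv' WITH (FORMAT csv, HEADER true)

Load order matters because of foreign keys, and the APIM schema has many tables, so a dedicated migration tool is still the safer route.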

WSO2 does not have any scripts or tools for cross-database migrations. However, you can use the API Controller [1] to migrate APIs and Applications from the previous environment with the H2 DB to a new one with PostgreSQL.
[1] - https://apim.docs.wso2.com/en/latest/install-and-setup/setup/api-controller/getting-started-with-wso2-api-controller/
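For illustration, a typical apictl round trip looks roughly like the sketch below; the environment names, URLs and the export path are assumptions, and exact flags vary between apictl versions (check apictl --help):

# register both environments and log in to the old (H2-backed) one
apictl add env old --apim https://old-apim:9443
apictl add env new --apim https://new-apim:9443
apictl login old -u admin -k

# export an API, then import it into the PostgreSQL-backed environment
apictl export api -n PizzaShackAPI -v 1.0.0 -e old -k
apictl login new -u admin -k
apictl import api -f ~/.wso2apictl/exported/apis/old/PizzaShackAPI_1.0.0.zip -e new -k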

Related

Dump HANA Database using "SAP HANA Web-based Development Workbench"

I'd like to get dump of a HANA DB using the browser based "SAP HANA Web-based Development Workbench".
I'm especially interested in exporting:
the structure of the tables including primary and foreign key constraints
the data inside the tables
Once I log into the "SAP HANA Web-based Development Workbench", I'm able to open the "Catalog" and execute SQL commands like SELECT * FROM MY_TABLE;. This allows me to download the data from one table as a CSV. But is there also something similar to pg_dump in Postgres, i.e. a command that exports both table structure and data as, for example, a tar-compressed .sql file?
You can right-click on the database which you would like to back up and select Export.
Be sure to activate the checkbox Including data. I am not sure if it is also necessary to check the Including dependencies checkbox.
You get a zip file which contains the SQL commands to create the tables and separate data files which contain the content of the tables. Each table is saved in a separate directory.
The export command seems relevant.
The server will generate .sql files for structure and .csv for data.
If the database is a managed service such as HANA Cloud, you don't have access to the filesystem and should dump the files to an S3 bucket or an Azure blob store.
Otherwise, just grab the files from the server box.
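The SQL form of that export, runnable from the catalog's SQL console, looks roughly like this (the schema name is a placeholder; check the EXPORT documentation for your HANA version):

-- export every object in a schema: structure as .sql files, data as .csv
EXPORT "MY_SCHEMA"."*" AS CSV INTO '/tmp/hana_export' WITH REPLACE;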

how to access a database when the access is restricted to a particular place

There is a student database in Some College. Some Organization wants to access it from their headquarters,
but access is restricted to within the college only.
Is it possible to extract the data?
How, and with what SQL queries and functions?
In network programming I can do this by connecting via TCP or UDP and extracting the information, but is that possible if the database is large?
How can we do this using SQL functions?
One thing you can do is dump the data and reimport it into your own database. It depends on how big the data you require is. At work I have similar problems and sometimes have to do the same.
If your admin dumps the data for you, then it is easier. You can also export it with SQL commands, but how depends on which database you are using. When you dump it to CSV format, you can import it into a SQLite database easily (or others like MySQL etc.), if you don't have a local version of your own database.
An alternative is to export the data yourself into a CSV. How to do this depends on the DB that you use, and you didn't mention it. Under Oracle you can use the SET and SPOOL commands to achieve this.
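For the Oracle case, a minimal SQL*Plus sketch (the table and columns are invented for the example):

SET HEADING OFF
SET FEEDBACK OFF
SET PAGESIZE 0
SPOOL students.csv
SELECT student_id || ',' || name || ',' || grade FROM students;
SPOOL OFF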

Viewing a grails schema while it runs in memory?

I want to view a grails schema for the default hsqldb in-memory database, but when I connect to the in-memory database with SquirrelSQL or DbVisualizer as userid: sa, password: (nothing), I only see two schemas:
INFORMATION_SCHEMA
PUBLIC
And neither contains my Domain tables. What's going on?
You need to set the hsqldb database to file, and set shutdown to true, as outlined here.
If you want to access the in-memory database, there's a writeup on how to do that here: http://act.ualise.com/blogs/continuous-innovation/2009/07/viewing-grails-in-memory-hsqldb/
There's also a new plugin that gives you access to a web-based database console that can access any database that you have a JDBC driver for, including the in-memory hsql db. The plugin docs are at http://grails.org/plugin/dbconsole and you install it the usual way, i.e. grails install-plugin dbconsole. Unfortunately the plugin has an artificial restriction to Grails 1.3.6+, so if you're using an older version of Grails you can use the approach from the blog post that inspired the plugin, http://burtbeckwith.com/blog/?p=446
To use the database console, select "Generic HSQLDB" from the settings dropdown and change the values to match what's in DataSource.groovy. This will probably just require changing the url to jdbc:hsqldb:mem:devDB.
You need to set up a shared hsql database: Creating a shared HSQLDB database
Edit: there is NO way to expose an in-memory hsqldb to another process. Either create a Server or WebServer, or use a file URL.
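If you go the Server route, one sketch (the class name is org.hsqldb.Server in the 1.8.x line that Grails shipped with; 2.x renamed it to org.hsqldb.server.Server) is to run a standalone server on a file database and point both Grails and your SQL client at it:

# start a standalone HSQLDB server backed by a file database
java -cp hsqldb.jar org.hsqldb.Server -database.0 file:devDB -dbname.0 devDB

Then set url = "jdbc:hsqldb:hsql://localhost/devDB" in DataSource.groovy, and SquirrelSQL or DbVisualizer can connect to the same URL while the app runs.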

Is there a library / tool to query MySQL data files (MyISAM / InnoDB) without the server? (the SQLite way)

Oftentimes I want to query my MySQL data directly without a server running or without having access to the server (but having read / write rights to the files).
Is there a tool or maybe even a library around to query MySQL data files like it is possible with SQLite? I'm specifically looking for InnoDB and MyISAM support. Performance is not a factor.
I don't have any knowledge of MySQL internals, but I presume it should be possible and not too hard to extract the relevant code?
Thank you for any suggestions!
MySQL offers a client library which is basically a miniature server. It's called libmysqld. It is C/C++ only, though. According to the docs, it exports an identical API to the normal C/C++ client library.
MySQL Embedded client library
I assume you are doing testing/dev work and don't want to run a server.
I had to do this a while ago, and the best I came up with is exporting it to SQL and loading that into memory:
mysqldump -u root -pPASSWORD DATABASENAME TABLENAME > table.sql
HSQLDB is an in-memory relational database for Java; you could run the queries against that, make the modifications you need and then re-export the .sql file. Bit of a roundabout way of doing it...
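A sketch of that roundabout route using SqlTool, which ships with HSQLDB (flags vary by version, and expect to hand-fix remaining dialect differences in the dump; --compatible was removed in MySQL 8.0):

# dump in a more portable SQL flavor (ANSI quoting instead of backticks)
mysqldump --compatible=ansi --skip-extended-insert -u root -pPASSWORD DATABASENAME TABLENAME > table.sql

# load the dump into an in-memory HSQLDB; the mem: database lives only for this session
java -jar sqltool.jar --inlineRc=url=jdbc:hsqldb:mem:scratch,user=sa,password= table.sql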

How do you version your database schema? [closed]

How do you prepare your SQL deltas? Do you manually save each schema-changing SQL statement to a delta folder, or do you have some kind of automated diffing process?
I am interested in conventions for versioning database schema along with the source code. Perhaps a pre-commit hook that diffs the schema?
Also, what options for diffing deltas exist aside from DbDeploy?
EDIT: seeing the answers I would like to clarify that I am familiar with the standard scheme for running a database migration using deltas. My question is about creating the deltas themselves, preferably automatically.
Also, the versioning is for PHP and MySQL if it makes a difference. (No Ruby solutions please).
See
Is there a version control system for database structure changes?
How do I version my MS SQL database in SVN?
and Jeff's article
Get Your Database Under Version Control
I feel your pain, and I wish there were a better answer. This might be closer to what you were looking for.
Mechanisms for tracking DB schema changes
Generally, I feel there is no adequate, accepted solution to this, and I roll my own in this area.
You might take a look at another, similar thread: How do I version my MS SQL database in SVN?.
If you are still looking for options: have a look at neXtep designer. It is a free GPL database development environment based on the concepts of version control. In the environment you always work with versioned entities and can focus on the data model development. Once a release is done, the SQL generation engine plugged into the version control system can generate any delta you need between two versions, and will offer you a delivery mechanism if you need one.
Among other things, you can synchronize and reverse synchronize your database during developments, create data model diagrams, query your database using integrated SQL clients, etc.
Have a look at the wiki for more information :
http://www.nextep-softwares.com/wiki
It currently supports Oracle, MySQL and PostgreSQL, and is written in Java, so the product runs on Windows, Linux and Mac.
I don't manage deltas. I make changes to a master database and have a tool that creates an XML-based build script from the master database.
When it comes time to upgrade an existing database I have a program that uses the XML-based build script to create a new database and the bare tables. I then copy the data over from the old database using INSERT INTO x SELECT * FROM y and then apply all indexes, constraints and triggers.
New tables, new columns, deleted columns all get handled automatically and with a few little tricks to adjust the copy routine I can handle column renames, column type changes and other basic refactorings.
I wouldn't recommend this solution on a database with a huge amount of data but I regularly update a database that is over 1GB with 400 tables.
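The copy step itself is plain SQL; a minimal sketch with an invented table, assuming the old and new databases live on the same server:

-- copy the surviving columns from the old database into the freshly built one
INSERT INTO newdb.customers (id, name, email)
SELECT id, name, email FROM olddb.customers;

-- indexes, constraints and triggers are applied only after the copy
CREATE INDEX idx_customers_email ON newdb.customers (email);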
I make sure that schema changes are always additive. So I don't drop columns and tables, because that would zap the data and could not be rolled back later. This way the code that uses the database can be rolled back without losing data or functionality.
I have a migration script that contains statements that create tables and columns if they don't exist yet and fill them with data.
The migration script runs whenever the production code is updated and after new installs.
When I want to drop something, I do it by removing it from the database install script and the migration script, so these obsolete schema elements are gradually phased out in new installs. The disadvantage is that new installs cannot be downgraded to a version that predates the install.
And of course I execute DDLs via these scripts and never directly on the database to keep things in sync.
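In MySQL terms an additive migration script ends up looking something like this sketch (table and column names are invented; MySQL has no ADD COLUMN IF NOT EXISTS, so the column check goes through information_schema):

CREATE TABLE IF NOT EXISTS invoices (
    id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    amount DECIMAL(10,2) NOT NULL
);

-- add the column only if it is missing
SELECT COUNT(*) INTO @col_exists
  FROM information_schema.COLUMNS
 WHERE TABLE_SCHEMA = DATABASE()
   AND TABLE_NAME = 'invoices'
   AND COLUMN_NAME = 'currency';
SET @ddl = IF(@col_exists = 0,
    'ALTER TABLE invoices ADD COLUMN currency CHAR(3) NOT NULL DEFAULT ''EUR''',
    'SELECT 1');
PREPARE stmt FROM @ddl;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;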
You didn't mention which RDBMS you're using, but if it's MS SQL Server, Red-Gate's SQL Compare has been indispensable to us in creating deltas between object creation scripts.
I'm not one to toot my own horn, but I've developed an internal web app to track changes to database schemas and create versioned update scripts.
This tool is called Brazil and is now open source under an MIT license. Brazil is Ruby / Ruby on Rails based and supports change deployment to any database that Ruby DBI supports (MySQL, ODBC, Oracle, Postgres, SQLite).
Support for putting the update scripts in version control is planned.
http://bitbucket.org/idler/mmp - a schema versioning tool for MySQL, written in PHP
We're exporting the data to a portable format (using our toolchain), then importing it into a new schema. No need for delta SQL. Highly recommended.
I use the Firebird database for most development, and I use the FlameRobin administration tool for it. It has a nice option to log all changes. It can log everything to one big file, or to one file per database change. I use the second option, and then I store each script in version control software - earlier I used Subversion, now I use Git.
I assume you can find some MySQL tool that has the same logging feature as FlameRobin does for Firebird.
In one of the database tables, I store the version number of the database structure, so I can upgrade any database easily. I also wrote a simple PHP script that executes those SQL scripts one by one on any target database (the database path and username/password are supplied on the command line).
There's also an option to log all DML (INSERT, UPDATE, DELETE) statements, and I activate this while modifying some 'default' data that each database contains.
I wrote a nice white paper on how I do all this in detail. You can download the paper in .pdf format along with demo PHP scripts from here.
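The version bookkeeping itself is tiny; a sketch of the pattern with invented names:

-- one-row table holding the current schema version
CREATE TABLE db_version (version INTEGER NOT NULL);
INSERT INTO db_version VALUES (1);

-- every numbered script ends by bumping the version, e.g. 0002_add_phone.sql:
ALTER TABLE person ADD phone VARCHAR(20);
UPDATE db_version SET version = 2;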
I also developed a set of PHP scripts where developers can submit their deltasql scripts to a central repository.
In one of the database tables (called TBSYNCHRONIZE), I store the version number of the latest executed script, so I can upgrade any database easily by using the web interface or a client developed specifically for Eclipse.
The web interface allows you to manage several projects. It also supports database "branches".
You can test the application at http://www.gpu-grid.net/deltasql (log in as admin with password testdbsync).
The application is open source and can be downloaded here:
http://sourceforge.net/projects/deltasql
deltasql is used in production in Switzerland and India, and is popular in Japan.
Some months ago I searched for a tool to version MySQL schemas. I found many useful tools, like Doctrine migrations, RoR migrations, and some tools written in Java and Python.
But none of them satisfied my requirements.
My requirements:
No requirements except PHP and MySQL
No schema configuration files, like schema.yml in Doctrine
Able to read the current schema from a connection and create a new migration script that reproduces an identical schema in other installations of the application
I started writing my own migration tool, and today I have a beta version.
Please try it if you have an interest in this topic.
Please send me feature requests and bug reports.
Source code: bitbucket.org/idler/mmp/src
Overview in English: bitbucket.org/idler/mmp/wiki/Home
Overview in Russian: antonoff.info/development/mysql-migration-with-php-project
I use http://code.google.com/p/oracle-ddl2svn/
I am interested in this topic too.
There are some discussions on this topic in the Django wiki.
Interestingly, it looks like CakePHP has schema versioning built in using just the cake schema generate command.
I am using strict versioning of the database schema (tracked in a separate table). Scripts are stored in version control, but they all verify the current schema version before making any change.
Here is the full implementation for SQL Server (the same solution could be developed for MySQL if needed): How to Maintain SQL Server Database Schema Version
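The guard at the top of each script is the key piece; a minimal T-SQL sketch (the table name and version numbers are illustrative):

-- refuse to run unless the database is at the expected version
IF NOT EXISTS (SELECT 1 FROM dbo.SchemaVersion WHERE Version = 41)
BEGIN
    RAISERROR('Expected schema version 41 - aborting.', 16, 1);
    RETURN;
END

-- ... the actual schema change goes here ...

UPDATE dbo.SchemaVersion SET Version = 42;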
For MySQL
When I land on a new DB:
First, I check the structure:
mysqldump --no-data --skip-comments --skip-extended-insert -h __DB_HOSTNAME__ -u __DB_USERNAME__ -p __DB1_NAME__ | sed 's/ AUTO_INCREMENT=[0-9]*//g' > FILENAME_1.sql
mysqldump --no-data --skip-comments --skip-extended-insert -h __DB_HOSTNAME__ -u __DB_USERNAME__ -p __DB2_NAME__ | sed 's/ AUTO_INCREMENT=[0-9]*//g' > FILENAME_2.sql
diff FILENAME_1.sql FILENAME_2.sql > DIFF_FILENAME.txt
cat DIFF_FILENAME.txt | less
Thanks to Stack Overflow users I was able to write this quick script to find structure differences.
Sources: https://stackoverflow.com/a/8718572/4457531 and https://stackoverflow.com/a/26328331/4457531
In a second step, I check the data, table by table, with mysqldiff. It's a bit archaic, but a PHP loop based on information_schema data does the job reliably.
For versioning, I use the same approach, but I format a SQL update script (to upgrade or roll back) from the diff results, and I use a version number convention (after several modifications the version number looks like an IP address).
initial version: 1.0.0
first number: incremented on structure changes
second number: incremented when data is added
third number: incremented when data is updated