Oracle 11gR2
Linux RHEL 6.3
Subversion 1.7
Trying to find a build tool like 'make', 'ant', or 'maven' for Oracle PL/SQL or SQL that will allow me to build my Oracle PL/SQL and SQL application. I can't seem to find a tool that will, for example, maintain the precedence needed to run my SQL (e.g. DDL) in the correct order. I can compare two schemas and generate DDL that will sync them, but the order in which this DDL is generated does not take precedence into account -- e.g. a parent table should be built before its child tables, yet the DDL output is in alphabetical order.
Any ideas?
I use make and a makefile like this, but I have the db password in the file! All the DDL creation scripts are separate files in the same folder. You need at least the Oracle Instant Client with sqlplus installed.
# When on Windows and starting GNU make from Git bash, we need to set this:
ifdef COMSPEC
SHELL=C:/Windows/System32/cmd.exe
endif
export ORASYSDBA="sys/oracle@192.168.0.112:1521/orcl as sysdba"
uninstall:
	sqlplus ${ORASYSDBA} @uninstall.sql
install: tablespaces users directories sequences package_reapi_headers tables types views sysgrants package_headers package_bodies
	sqlplus ${ORASYSDBA} @create_directories.sql
tablespaces:
	sqlplus ${ORASYSDBA} @create_tablespaces.sql
users:
	sqlplus ${ORASYSDBA} @create_users.sql
You then simply call it with:
make install
or
make users
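If you'd rather not keep the password in the makefile, one option is to override ORASYSDBA on the command line; command-line variable assignments take precedence over assignments inside the makefile. A sketch only, not tested against your setup (DB_PASSWORD is just a hypothetical shell variable):
$ export DB_PASSWORD=oracle   # set this outside of version control
$ make install ORASYSDBA="\"sys/${DB_PASSWORD}@192.168.0.112:1521/orcl as sysdba\""
The escaped inner quotes keep the connect string quoted when make expands ${ORASYSDBA} in the recipes, just as the quoted assignment in the makefile does.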
I am new to HSQLDB and want to create a persistent database, so that when I reopen HSQLDB the previously created tables and their contents still exist. I read the HSQLDB documentation but cannot find this information. My current HSQLDB.bat is the following:
cd %HSQLDB%
java -cp ../lib/hsqldb.jar org.hsqldb.util.DatabaseManagerSwing
REM java -cp ../lib/hsqldb.jar org.hsqldb.server.Server
REM java -classpath lib/hsqldb.jar org.hsqldb.server.Server --database.0 file:hsqldb/demodb --dbname.0 testdb
When running the batch, the following dialogue opens:
Note that the URL field contains jdbc:hsqldb:mem:..
After [OK], I call two scripts, one creating two tables, the other filling them with test data. But reopening HSQLDB does not read the tables with their data.
What is missing, and what is wrong? A SHUTDOWN does not change anything. I also replaced mem with file, but could not make the database persistent. What did I overlook in the HSQLDB guide?
Use the drop-down list for Type: and select HSQL Database Engine Standalone, then edit the suggested URL to add your database file path such as jdbc:hsqldb:file:///C:/Program Files/hsqldb-2.5.0/hsqldb/data/dbname where the last part of the path is the name of the database files.
Absolute paths are preferred to avoid dependency on the directory in which you execute your batch.
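If you prefer to keep starting the GUI from your batch file, you can also pass the connection URL on the command line instead of editing it in the dialog. A sketch, assuming the same example path as above; check the DatabaseManagerSwing options shipped with your HSQLDB version:
java -cp ../lib/hsqldb.jar org.hsqldb.util.DatabaseManagerSwing --url "jdbc:hsqldb:file:///C:/Program Files/hsqldb-2.5.0/hsqldb/data/dbname"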
I have been using a SAP HANA DB instance and have been running several queries on it. I need to extract the query history, preferably from a system table or elsewhere. Please let me know if this is possible, and any pointers to achieve it.
If you want a detailed history of executed queries, you need to activate the HANA SQL trace. You can find more information in the HANA documentation. Of course, this will not work retrospectively. So you will have to activate the trace first and then run the queries that you want to look at.
Additionally, the SQL Plan Cache provides aggregated information about past queries. It is aggregated by the prepared statements and provides runtime information like average execution time and result size. The monitoring view for this is SYS.M_SQL_PLAN_CACHE.
You can trace DDL statements by querying the M_EXECUTED_STATEMENTS view, which is located in the SYS schema. Note that your user needs SELECT permission on this view to be able to query it.
You can enable dumping executed SQL statements into a flat file in HANA Studio and then either grep those flat files in bash or query the M_TRACEFILE_CONTENTS view (again in the SYS schema).
Note that trace files are very messy and you need proper grep skills to extract the executed SQL statements from them -- I haven't yet figured out how to configure the HANA database to generate cleaner trace files.
Handy commands for finding and searching trace files:
# find / -name '*.trc'    # finding trace files
$ grep -n -B 5 -A 1 '^.*select.*$' flatfile # display matches in a flat file with line numbers and context (5 lines above, 1 line below)
$ grep -n -B 5 -A 1 '^.*select\|84443781510009.*$' flatfile # OR several keywords together with \|
HANA Studio allows you to configure the tracing behaviour (tracing only a given user, object, etc.); it is best to change this behaviour from the HANA Studio / hdbsql level.
As mentioned earlier, the M_SQL_PLAN_CACHE and M_SQL_PLAN_CACHE_RESET system views allow handy querying and retrieval of executed SQL statements as well as their statistics.
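For a quick look at the aggregated history, you can also query the plan cache from the shell via hdbsql. A sketch only: host, user and password are placeholders, and the column names should be checked against the M_SQL_PLAN_CACHE reference for your revision:
hdbsql -n <<host>>:<<port>> -u <<username>> -p <<password>> \
  "SELECT TOP 20 STATEMENT_STRING, EXECUTION_COUNT FROM SYS.M_SQL_PLAN_CACHE ORDER BY EXECUTION_COUNT DESC"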
I'm migrating a database from DB2 10.1 for Windows x86_64 to DB2 10.1 for Linux x86_64 - this is a combination of operating systems and machine types that have incompatible backup file formats, which means I can't just do a backup and restore.
Instead, I'm using db2move to back up the database from Windows and restore it on Linux. However, db2move does not move the materialized query tables (MQTs); instead I need to use db2look. This poses the challenge of finding a generic method to handle the process. Right now, to dump the DDL for the materialized query tables, I have to run the following commands:
db2 connect to MYDATABASE
db2 -x "select cast(tabschema || '.' || tabname as varchar(80)) as tablename from syscat.tables where type='S'"
This returns a list of MQTs such as:
MYSCHEMA.TABLE1
MYSCHEMA.TABLE2
MYOTHERSCHEMA.TABLE3
I can then take all those values and feed them into a db2look to generate the DDLs for each table and send the output to mqts.sql.
db2look -d MYDATABASE -e -t MYSCHEMA.TABLE1 MYSCHEMA.TABLE2 MYOTHERSCHEMA.TABLE3 -o mqts.sql
Then I copy the file mqts.sql to the target computer, to which I've previously restored all the non-MQTs, and run the following command to restore the MQTs:
db2 -tvf mqts.sql
Is this the standard way to migrate an MQT? There has got to be a simpler way that I'm missing here.
db2move is mainly for migrating data and things directly related to that data, such as the DDL of each table. db2move does not even migrate the relationships between tables, so you have to recreate them from the DDL.
With that in mind, an MQT is just DDL; it does not hold any data of its own. The tool for dealing with DDL is db2look, and it has many options to extract exactly what you want.
The process you describe is a normal way to extract that DDL. I have seen far more involved processes than yours dealing with DDL and db2move/db2look; yours is "simple".
Another option is to use Data Studio; however, you cannot script that.
I believe what you are doing is right, because MQTs do not have data of their own and are populated from the base tables. So the process should be to migrate the data into the base tables that the MQT refers to and then simply create/refresh the MQTs.
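If you want to script the process from the question end to end, something like this should work. A sketch only: MYDATABASE is a placeholder, and the list of MQTs is deliberately left unquoted so the shell word-splits it into separate names for db2look -t:
db2 connect to MYDATABASE
# collect all MQT names (type 'S' = summary / materialized query table)
MQTS=$(db2 -x "select cast(tabschema || '.' || tabname as varchar(80)) from syscat.tables where type='S'")
# generate the DDL for all of them in one go
db2look -d MYDATABASE -e -t $MQTS -o mqts.sql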
How do I export a PostgreSQL DB into SQL that can be executed in another pgAdmin?
Exporting as a backup file doesn't work when there's a difference in version.
Exporting as an SQL file doesn't execute when run in a different pgAdmin.
I tried exporting a DB with pgAdmin III, but when I tried to execute the SQL in another pgAdmin it threw errors, and when I tried to "restore" a backup file, it said there's a version difference and it can't do the import/restore.
So is there a "safe" way to export a DB into standard SQL that can be executed plainly in pgAdmin SQL editor, regardless of which version it is?
Don't try to use PgAdmin-III for this. Use pg_dump and pg_restore directly if possible.
Use the version of pg_dump from the destination server to dump the origin server. So if you're going from (say) 8.4 to 9.2, you'd use 9.2's pg_dump to create a dump. If you create a -Fc custom format dump (recommended) you can use pg_restore to apply it to the new database server. If you made a regular SQL dump you can apply it with psql.
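For example, a sketch with placeholder host and database names (the target database is assumed to exist already):
# run the NEWER pg_dump against the old server, producing a custom-format dump
pg_dump -Fc -h old-server -U postgres -f mydb.dump mydb
# restore it into the new server with the matching pg_restore
pg_restore -h new-server -U postgres -d mydb mydb.dump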
See the manual on upgrading your PostgreSQL cluster.
Now, if you're trying to downgrade, that's a whole separate mess.
You'll have a hard time creating an SQL dump that'll work in any version of PostgreSQL. Say you created a VIEW that uses a WITH query. That won't work when restored to PostgreSQL 8.3 because it didn't support WITH. There are tons of other examples. If you must support old PostgreSQL versions, do your development on the oldest version you still support and then export dumps of it for newer versions to load. You cannot sanely develop on a new version and export for old versions, it won't work well if at all.
More troubling, developing on an old version won't always give you code that works on the new version either. Occasionally new keywords are added when support for new specification features is introduced. Sometimes issues are fixed in ways that affect user code. For example, if you were to develop on the (ancient and unsupported) 8.2, you'd have lots of problems with implicit casts to text on 8.3 and above.
Your best bet is to test on all supported versions. Consider setting up automated testing using something like Jenkins CI. Yes, that's a pain, but it's the price for software that improves over time. If Pg maintained perfect backward and forward compatibility it'd never improve.
Export/Import with pg_dump and psql
1. Set PGPASSWORD
export PGPASSWORD='123123123';
2. Export DB with pg_dump
pg_dump -h <<host>> -U <<username>> <<dbname>> > /opt/db.out
/opt/db.out is the dump path; you can specify your own.
3. Then set PGPASSWORD again for your other host. If the host or the password is the same, this is not required.
4. Import the DB on your other host
psql -h <<host>> -U <<username>> -d <<dbname>> -f /opt/db.out
If the username is different, find and replace it with your local username in the db.out file, and make sure only the username is replaced and not data.
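A rough way to do that replacement without touching data rows is to restrict it to the OWNER TO statements. A sketch only: olduser and newuser are placeholders, and you should review the result before importing:
# rewrites only "OWNER TO olduser;" occurrences and keeps a .bak copy of the original
sed -i.bak 's/OWNER TO olduser;/OWNER TO newuser;/g' /opt/db.out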
If you still want to use pgAdmin, see the procedure below.
Export DB with PGAdmin:
Select DB and click Export.
File Options
Name the DB file for your local directory
Select Format - Plain
Ignore Dump Options #1
Dump Options #2
Check Use Insert Commands
Objects
Uncheck tables if you don't want any
Import DB with PGAdmin:
Create New DB.
With the new DB still selected, click Menu->Plugins->PSQL Console
Type the following command to import the DB
\i /path/to/db.sql
If you want to export the schema and data separately:
Export Schema
File Options
Name the schema file in your local directory
Select Format - Plain
Dump Options #1
Check Only Schema
Check Blobs (By default checked)
Export Data
File Options
Name the data file in your local directory
Select Format - Plain
Dump Options #1
Check Only Data
Check Blobs (By default checked)
Dump Options #2
Check Use Insert Commands
Check Verbose messages (By default checked)
Note: Export/Import takes time depending on DB size, and pgAdmin adds some more overhead.
Using Toad for Oracle, I can generate full DDL files describing all tables, views, source code (procedures, functions, packages), sequences, and grants of an Oracle schema. A great feature is that it separates each DDL declaration into different files (a file for each object, be it a table, a procedure, a view, etc.) so I can write code and see the structure of the database without a DB connection. The other benefit of working with DDL files is that I don't have to connect to the database to generate a DDL each time I need to review table definitions. In Toad for Oracle, the way to do this is to go to Database -> Export and select the appropriate menu item depending on what you want to export. It gives you a nice picture of the database at that point in time.
Is there a "batch" tool that exports
- all table DDLs (including indexes, check/referential constraints)
- all source code (separate files for each procedure, function)
- all views
- all sequences
from SQL Server?
What about PostgreSQL?
What about MySQL?
What about Ingres?
I have no preference as to whether the tool is Open Source or Commercial.
For SQL Server:
In SQL Server Management Studio, right click on your database and choose 'Tasks' -> 'Generate Scripts'.
You will be asked to choose which DDL objects to include in your script.
In PostgreSQL, simply use the -s option to pg_dump. You can get it as a plain SQL script (one file for the whole database) or in a custom format that you can then throw a script at to get one file per object if you want it.
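For example, a sketch with placeholder database and file names:
# schema only (DDL), plain SQL, one file for the whole database
pg_dump -s -f schema.sql mydb
# or a custom-format schema dump that a splitter script can turn into one file per object
pg_dump -s -Fc -f schema.dump mydb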
The PgAdmin tool will also show you each object's SQL dump, but I don't think there's a nice way to get them all at once from there.
For MySQL, I use mysqldump. The command is pretty simple.
$ mysqldump [options] db_name [tables]
$ mysqldump [options] --databases db_name1 [db_name2 db_name3...]
$ mysqldump [options] --all-databases
Plenty of options for this. Take a look here for a good reference.
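For a DDL-only export along the lines of the question, something like this works (a sketch; db_name is a placeholder):
# table definitions plus stored routines and triggers, no row data
mysqldump --no-data --routines --triggers db_name > db_schema.sql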
In addition to the "Generate Scripts" wizard in SSMS you can now use mssql-scripter which is a command line tool to generate DDL and DML scripts.
It's an open source and Python-based tool that you can install via:
pip install mssql-scripter
Here's an example of what you can use to script the database schema and data to a file.
mssql-scripter -S localhost -d AdventureWorks -U sa --schema-and-data > ./adventureworks.sql
More guidelines: https://github.com/Microsoft/sql-xplat-cli/blob/dev/doc/usage_guide.md
And here is the link to the GitHub repository: https://github.com/Microsoft/sql-xplat-cli
MySQL has a great tool called MySQL Workbench that lets you reverse- and forward-engineer databases, as well as synchronize them, which I really like. You can view the DDL when executing these functions.
I wrote SMOscript which does what you are asking for (referring to MSSQL Server)
Following what Daniel Vassallo said, this worked for me:
pg_dump -f c:\filename.sql -C -n public -O -s -d Moodle3.1 -h localhost -p 5432 -U postgres -w
Try this Python-based tool: Yet another script to split PostgreSQL dumps into object files