DEC Alpha OpenVMS PowerHouse data migration

I have been charged with determining the requirements to migrate data from applications running OpenVMS on DEC Alpha. I have no knowledge of OpenVMS or PowerHouse; however, I have plenty of experience with Linux. I am able to connect to the server via SSH.
My question is: are there any standard tools included with OpenVMS that I can use to help me verify the database back end and get an idea of how many tables, rows of data, etc. there are?

What is the goal? Move the (structured) data over once and for all?
Move application functionality over?
Move ongoing changes over?
You'll have to dig into the system to figure out what it does and what it is built upon. Is there no design guide, operations playbook, or set of backup procedures?
Most likely it is based on RMS (indexed) files. The data files would be named .IDX, .INX, or .DAT or some such, and there would be many files, one per 'table/object'. The procedures would talk about BACKUP and CONVERT.
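If so, plain DCL can already give you a first inventory over SSH. Something like this, assuming RMS indexed files (the directory and file names are made up; ANALYZE/RMS_FILE is a standard OpenVMS utility):
$ ! List candidate RMS data files and their sizes (directory name is made up)
$ DIRECTORY /SIZE=ALL [APPDATA...]*.IDX, *.INX, *.DAT
$ ! Inspect one file: key structure, record format, record statistics
$ ANALYZE /RMS_FILE /STATISTICS DISK$DATA:[APPDATA]CUSTOMER.IDX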
There would be a PowerHouse Dictionary from which metadata can be extracted with "qshow generate file" into .ph files.
You may want to look at Attunity (I work there), Connx or Easysoft to use those definitions to provide ODBC or JDBC access to the data from the outside.
Attunity has tools to bulk unload into any target DB with 'one click' once the data definitions are in place, but it is likely too costly for one-time use.
Still, if the alternative is two months of consulting/coding then a tool may be attractive.
If it is based on Rdb, then you would see a few .RDB files, along with .RBR and .AIJ files.
There would be .SQL script morsels and operations via "RMU".
Like any other database it includes metadata, and it has native options for remote ODBC or (Oracle) OCI access.
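A rough sketch of poking at an Rdb database from DCL (database and table names are made up; interactive Rdb SQL understands SHOW TABLES and plain SELECTs):
$ RMU /DUMP /HEADER DISK$DATA:[DB]MYDB.RDB  ! database root overview
$ SQL$                                      ! interactive Rdb SQL
SQL> ATTACH 'FILENAME DISK$DATA:[DB]MYDB';
SQL> SHOW TABLES;
SQL> SELECT COUNT(*) FROM CUSTOMER;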
Hope this helps some,
Hein.

Related

Progress DB: backup restore and query individual tables

Here is the use case: we need to back up some of the tables from a client server, copy them to our servers, restore them, and then run some queries using ODBC.
I managed to do this process for the entire database by using probkup for backup, prorest for restore and proserve to make it accessible for SQL queries.
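For reference, the full-database sequence looks roughly like this (database name, path, and port are made up):
probkup online clientdb /backups/clientdb.bak   # online full backup
prorest devdb /backups/clientdb.bak             # restore into the dev copy
proserve devdb -S 5566 -H localhost             # serve it for SQL/ODBC clients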
However, some of the databases are big (> 8 GB), so we are looking for a way to back up only the tables we need. I didn't find anything in the probkup documentation about how this can be done.
Progress only supports full database backups.
To get the effect that you are looking for you could dump (export) the tables that you want and then load them into an empty database.
"proutil dump" and "proutil load" are where you want to start digging.
The details will vary depending on exactly what you want to do and what resources and capabilities you have available to you.
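As a rough illustration of the binary dump-and-load route (database, table, and directory names are made up; a binary load is normally followed by an index rebuild):
proutil clientdb -C dump customer /tmp/dumps   # writes /tmp/dumps/customer.bd
proutil devdb -C load /tmp/dumps/customer.bd   # load into the target db
proutil devdb -C idxbuild all                  # rebuild indexes after the load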
Another option would be to replicate the tables in question to a partial database. Progress has a product called "pro2" that can help with that. It is usually pointed at SQL targets but you could also point it at a Progress database.
Or, if you have programming skills, you could put together a solution using replication triggers (under the covers that's what pro2 does...)
probkup and prorest are block-level programs and can't do a backup or restore by table.
To do what you're asking for, you'll need to dump the data from the source DB's tables and then load it into the target DB.
If your objective is simply to maintain a copy of the DB, you might also try incremental backups. Depending upon your situation, that might speed things up a bit.
Other options include various forms of DB replication, which allow you to keep real- or near-real-time copies of your database.
OpenEdge Replication. With the correct license, you can do query-only access on the replication target, which is good for reporting and analysis.
Third-party replication products. These can be more flexible in terms of both target DBs and limiting the tables to be replicated.
Home-grown replication (by copying and applying AI files). This is not terribly complicated, but you have to factor in the cost of doing the work and maintaining the system. There are some scripts out there that can get you started.
Or, as Tom said, you can get clever with replication via triggers.

What are ways to transfer tables from Oracle to SQL Server

I've been searching the internet for this question:
What are ways to transfer data and tables on a daily basis from Oracle's Hyperion to SQL Server 2000?
I am an intern at a company and am trying to figure out possible ways to do this. Any help or a pointer in the right direction is greatly appreciated.
This is going to depend a lot on specifics. Here are just a few possible solutions:
DTS
DTS is packaged with SQL 2000 and is made for this kind of a task. If written correctly, your DTS package can have good error-handling and be rerunnable/reusable.
SSIS
SSIS is actually packaged with SQL 2005 and above, but you can connect it to other databases. It's basically a better version of DTS. (Technically it's radically different from DTS, but it has a lot of the same functionality.)
Linked Servers
From SQL 2000 you should be able to connect directly to your Oracle database as a linked server. On the pros side, this kind of direct access can be easy to work with if you don't have other technical skills such as DTS or SSIS, but it can be complex to get the initial set-up right and there may be security concerns/issues.
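A sketch of the set-up, with made-up server name, TNS alias, and credentials (MSDAORA is the classic Microsoft OLE DB provider for Oracle in the SQL 2000 era):
EXEC sp_addlinkedserver
    @server = 'ORA_HYP',
    @srvproduct = 'Oracle',
    @provider = 'MSDAORA',
    @datasrc = 'tns_alias';

EXEC sp_addlinkedsrvlogin
    @rmtsrvname = 'ORA_HYP',
    @useself = 'FALSE',
    @rmtuser = 'scott',
    @rmtpassword = 'tiger';

-- Pass-through query executed on the Oracle side
SELECT * FROM OPENQUERY(ORA_HYP, 'SELECT * FROM some_table');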
Build Your Own
Depending on what other technologies you use, you can build your own application to do the ETL (Extract/Transform/Load, which is what you're doing). This could be in .NET, Java, etc. On the pros side you can use something with which you're familiar, but there's a big downside: most of the low-level work is already done in tools like DTS/SSIS, so why reinvent the wheel?
BCP
You can simply extract the data from Oracle as .csv files (or some other format) and then import them using SQL Server's bulk copy program. This can be fast, but there aren't many bells and whistles to go with it. If this is a one-time thing with just a few tables, though, then this is probably the easiest and fastest way to do it.
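The import half might look roughly like this, with made-up names (-c is character mode, -t sets the field terminator, -T uses a trusted connection):
bcp MyDB.dbo.MyTable in C:\exports\mytable.csv -c -t, -S MYSERVER -T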
Third Party Applications
There are a slew of ETL applications already written out there (Data Import, Data Slave, etc.). They will usually provide wizards and one-click solutions (maybe a few more than one click), but they are also going to cost a bit of extra money.
EDIT:
Given your latest comment, I would probably go with a DTS package that's scheduled in SQL Agent to run daily. You can add in error-handling and have the system email/text/call someone if there's ever an issue (or do positive-case reporting, i.e. send a message when it's successful, so that someone knows there's a problem if they don't get a message each day).
In our company we use ADO.NET for the same task.
We created an Oracle data source, took all the data, and then recreated it in SQL Server.
You could write DTS packages to copy the data, and schedule them to run within Sql Server Agent.
See DTS Overview for information on DTS packages.
Here's a tutorial on creating a DTS package: Creating DTS Packages With SQL Server 2000
Oracle Hyperion is a suite of products, largely unrelated to Oracle's database product. I expect you are referring to a product such as Hyperion Financial Management or Hyperion Strategic Finance. These products have APIs that can be consumed using COM Interop or web services. The data can be extracted from the internal multidimensional database by analyzing the database metadata, creating dimension trees, and then using that information to create selections that represent subcubes within the database, allowing you to get or set cell data.
I don't know what your level of knowledge of multidimensional databases is, but unless it is substantial you may find the task pretty hard. You also need to get a handle on the particular product API.
My company specializes in these kinds of activities, and we have components for this kind of thing. Drop me a line on my blog if you need further advice.
danielvaughan.org
Cheers,
Daniel
I don't know anything about Hyperion, but SQL Server 2000 is very old and may not have a driver that can pull data from Hyperion if that product's version is newer than 2000. You may need to look at whether there is a way to push the data from Hyperion rather than pull it into SQL Server 2000. One way I have done this in the past is to create a pipe-delimited text file from the database that originally has the data and place it in a processing directory. I do know that DTS will process a pipe-delimited text file. So if you can't find a driver to process this data directly, consider whether you can push it out to a file and then process that. You will have to schedule a time gap between the job on Hyperion that creates the file and the DTS package job. But if you are only doing it once a day, that's probably not a problem.

How to SQL backup or mirror a database?

We are not hosting our databases. Right now, one person manually creates a .bak file from the production server. The .bak is then copied to each developer's PC. Is there a better approach that would make this process easier? I am working on a build project for our team right now, and I am thinking about adding the .bak file to SVN so each person has the correct local version. I tried to generate a SQL script, but it has no data, just the schema.
Developers can't share a single dev database?
Adding the .bak file to SVN sounds bad. That's going to keep every version of it forever - you'd be better off (in most cases) leaving it on a network share visible by all developers and letting them copy it down.
You might want to use SSIS packages to let developers make ad hoc copies of production.
You might also be interested in the Data Publishing Wizard, an open source project that lets you script databases with their data. But I'd lean towards SSIS if developers need their own copy of the database.
If the production server has online connectivity to your site you can try the method called "log shipping".
This entails creating a baseline copy of your production database, then taking chunks of the transaction log written on the production server and applying the actions contained in those log chunks to your copy. This ensures that, after a certain delay, your backup database will be in the same state as the production database.
Detailed information can be found here: http://msdn.microsoft.com/en-us/library/ms187103.aspx
As you mentioned SQL 2008 among the tags: as far as I remember, SQL 2008 has some kind of built-in automation to set this up.
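For reference, a sketch of the manual steps that log shipping automates, with made-up database and path names (the baseline is a full backup restored WITH STANDBY; each log chunk is then applied the same way):
-- On the production server: periodic transaction log backup
BACKUP LOG ProdDb TO DISK = N'\\share\logship\ProdDb_0001.trn';

-- On the copy: apply the chunk, keeping the database readable between restores
RESTORE LOG ProdDb FROM DISK = N'\\share\logship\ProdDb_0001.trn'
WITH STANDBY = N'C:\logship\ProdDb_undo.dat';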
You can create a scheduled backup and restore.
You don't have to use a developer's PC for the backup; SQL Server has its own backup folder you can use.
Also, you can have a restore script generated for each PC from one location, if the developers want to hold the database on their local systems.
RESTORE DATABASE [xxxdb] FROM
DISK = N'\\xxxx\xxx\xxx\xxxx.bak'
WITH FILE = 1, NOUNLOAD, REPLACE, STATS = 10
GO
Check out SQL Source Control from RedGate; it can be used to keep schema and data in sync with a source control repository (the docs say it supports SVN). It supports a database on a centrally deployed server, or on many developer machines as well.
Scripting out the data probably won't be a fun time for everyone, depending on how much data there is, but you can also select which tables you're going to script (like lookups) and populate any larger business-entity tables using SSIS (or a data generator for testing).

Tools to work with stored procedures in Oracle, in a team?

What tools do you use to develop Oracle stored procedures, in a team:
To automatically "lock" the current procedure you are working with, so nobody else in the team can make changes to it until you are finished.
To automatically send the changes you make in the stored procedure, in an Oracle database, to a Subversion, CVS, ... repository
Thanks!
I'm not sure if the original poster is still monitoring this, but I'll ask the question anyways.
The original post requested to be able to:
To automatically "lock" the current
procedure you are working with, so
nobody else in the team can make
changes to it until you are finished.
Perhaps the problem here is one of development paradigm more than the inability of a product to "lock" the stored proc. Whenever I hear "I want to lock this so no one else changes it" I immediately get the feeling that people are sharing a schema and everyone is developing in the same space.
If this is the case, why not simply let everyone have their own schema with a copy of the data model? I mean seriously folks, it doesn't "cost" anything to create another schema. That way, each developer can make changes until they're blue in the face without affecting anyone else.
Another trick I've used in the past (on small teams), when it wasn't feasible to let every developer have their own copy of the data because of size, was to have a master schema with all the tables and code in it, with public synonyms pointing to it all. Then, if a developer wants to work on a stored proc, he simply creates it in his own schema. That way Oracle name resolution finds that one first instead of the copy in the master schema, allowing him to test his code without affecting anyone else. This does have its drawbacks, but this was a very specific case where we could live with them. I would NEVER implement something like this in production, obviously.
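A sketch of that trick, with made-up names (Oracle resolves a name to an object in your own schema before falling back to a public synonym):
-- Once, in the master schema: expose the shared code to everyone
CREATE PUBLIC SYNONYM calc_totals FOR master.calc_totals;

-- A developer who wants to change the proc creates a private copy in his
-- own schema; his sessions now resolve calc_totals to this copy, while
-- everyone else still reaches master.calc_totals via the public synonym.
CREATE OR REPLACE PROCEDURE calc_totals AS
BEGIN
  NULL; -- work-in-progress version
END;
/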
As for the second requirement:
To automatically send the changes you make in the stored procedure, in an Oracle database, to a Subversion, CVS, ... repository.
I'd be surprised to find tools out there smart enough to do this (perhaps an opportunity :). Such a tool would have to connect to your DB, query the data dictionary (USER_SOURCE) and pull out the associated text. A tall order for source control systems, which are almost universally file-based.
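For reference, pulling the text of one procedure out of the dictionary looks roughly like this (the procedure name is made up):
SELECT text
  FROM user_source
 WHERE name = 'CALC_TOTALS'
   AND type = 'PROCEDURE'
 ORDER BY line;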
Oracle's new SQL Developer has version control built-in.
Here is a link to the product.
http://www.oracle.com/technology/products/database/sql_developer/files/what_is_sqldev.html
Treat PL/SQL like any other code: store it in files, and manage those files with your revision control tool and your internal procedures.
If you do not already have a revision control tool, then write your requirements down and pick one. A lot of people, it seems, use Subversion, with TortoiseSVN as a client on Windows (I do).
The thing is: use your tool as recommended, and adapt your procedures accordingly. For instance, Subversion uses a copy-modify-merge model by default, as opposed to the lock-modify-unlock model which you seem to favor.
In my case, I like to use TortoiseSVN, as stated above. And as is usual with this tool:
I never lock any files. This is very manageable with small teams, and it requires planning ahead on larger ones, which is always a good thing IMHO.
I send my changes manually back to the server, because ... I don't think there's another way with Subversion (plus, internal procedures forbid a commit without a message, which is also a good thing IMHO).
And whatever your choice, I recommend reading this post (and related ones) about database versioning.
A relatively simple (if slightly old-fashioned) solution might be to use a "locking" rather than "merge" mode version control system.... Subversion or CVS generally use a "merge" mode (although I believe Subversion can be made to "lock" files?)
"Locking" mode version control systems do have their own drawbacks of course.....
The only way I can think of doing it in Oracle might be some sort of BEFORE CREATE trigger, maybe referencing a table to look up who is allowed to change a given package, something like the sketch below. Sounds a bit nasty though?
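Something along these lines, with a made-up lock table (ora_dict_obj_name and ora_login_user are standard event attribute functions; untested, so treat it as a sketch):
-- Block anyone but the registered owner from (re)creating a "locked" object
CREATE OR REPLACE TRIGGER enforce_code_locks
BEFORE CREATE ON DATABASE
DECLARE
  n NUMBER;
BEGIN
  SELECT COUNT(*) INTO n
    FROM code_locks              -- made-up table: object_name, locked_by
   WHERE object_name = ora_dict_obj_name
     AND locked_by <> ora_login_user;
  IF n > 0 THEN
    RAISE_APPLICATION_ERROR(-20001,
        ora_dict_obj_name || ' is locked by another developer');
  END IF;
END;
/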
Using Source Control for Oracle you get a lot of what you're looking for.
Stored procedures (as well as packages, functions, tables, etc.) can be locked manually using the interface, not automatically, but this does prevent others from making changes.
The new SQL to create the object can then be checked into SVN or TFS (no CVS support unfortunately).
The tool is not free but has a free 28-day trial.
Using Oracle SQL Developer 1.5, you can easily create and manage connections to CVS or Subversion. To create a CVS connection (for example), click Versioning -> CVS -> Check out Module. You will run through a wizard to create the connection (host, username, etc), then you can check your procedures/functions out and in as normal.
Integration with CVS is also provided in Toad.
You may also want to look at Aqua Data Studio. It has built-in SVN support as well, and it is a great stored proc editor.
After searching without luck for a tool to handle version control for Oracle objects, we created the following (not perfect but suitable) solution:
Using the dbms_metadata package we create a metadata dump of our Oracle server. We create one file per object, hence the result is not one huge file but a bunch of files. To recognize deleted objects, we delete all the files before creating the dump again.
We copy all the files from the server to the client computer.
Using Netbeans we recognize the changes and commit them to the CVS server (or check the diffs...). Any CVS-handling software would work here, but we were already using Netbeans for other purposes. Netbeans also allows us to create an Ant task for calling the Oracle process mentioned in step 1, copying the files mentioned in step 2, and so on.
Here is the most important query for step 1:
SELECT object_type, object_name,
       dbms_metadata.get_ddl(object_type, object_name) object_ddl
  FROM user_objects
 WHERE object_type IN ('INDEX', 'TRIGGER', 'TABLE', 'VIEW', 'PACKAGE',
                       'FUNCTION', 'PROCEDURE', 'SYNONYM', 'TYPE')
 ORDER BY object_type, object_name;
The one-file-per-object approach helps to identify the changes. If I add a field to table TTTT (not a real table name, of course) then only the TABLE_TTTT.SQL file will be modified.
Both step 1 and step 3 are slow processes (several minutes for a few thousand files).
Toad also does this without requiring CVS / SVN.

What is the easiest way to copy a database from one Informix IDS 11 Server to another

The source database is quite large. The target database doesn't grow automatically. They are on different machines.
I'm coming from a MS SQL Server, MySQL background and IDS11 seems overly complex (I am sure, with good reason).
One way to move data from one server to another is to export the database using the dbexport command.
Then, after copying the export files to the destination server, run the dbimport command.
To create a new database you need to create the DBSpace for the new database using the onmonitor tool; at this point you could use the existing files from the other server.
You will then need to create the database on the destination server using the dbaccess tool. The dbaccess tool has a database option that allows you to create a database. When creating the database you specify what DBSpace to use.
The source database may be made up of many chunks which you will also need to copy and attach to the new database.
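Roughly, with made-up database, directory, and dbspace names, the pair looks like:
dbexport -o /tmp/exports mydb
# writes the schema plus per-table ASCII files under /tmp/exports/mydb.exp

dbimport -i /tmp/exports -d datadbs1 mydb
# recreates mydb on this server, placing it in dbspace datadbs1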
The easiest way is dbexport/dbimport, as others have mentioned.
The fastest way is using onpload, the High Performance Loader. If you have lots of data, but not a ridiculous number of tables, this is definitely worth pursuing. There are some bits and pieces on the IIUG site that may be of assistance in scripting the HPL to generate all the config you'll need.
You have a few choices.
dbexport/dbimport
onunload/onload
HPL (High Performance Loader)
I have personally used onunload/onload and dbexport/dbimport. I have not used HPL. I'm using IDS 10.
onunload/onload IBM docs
Backs up the raw database to disk or tape in page-sized chunks
faster (especially if you go to disk)
Issues if the database servers are on different operating systems or hardware, or if they just have different page sizes
dbexport/dbimport IBM docs
backs up the database in delimited ASCII files
writes an ASCII schema of the database including all users, tables, views, indexes, etc.; everything about the structure of the database goes into one huge plain text file
separate plain text files for each table of the database as well
not so fast
issues on dbimport with any table that has bad data, any view with incorrect syntax, etc. (This can be a good thing: an opportunity to identify and clean)
DO NOT LEAVE THIS TAPE ON THE FRONT SEAT OF YOUR CAR WHEN YOU RUN INTO THE STORE FOR AN ICE CREAM (or you'll be on the news). Also read ... Not a very secure way to be moving data around. :)
Limitation: Requires exclusive access to the source database.
Here is a good place to start in the docs --> Migration of Data Between Database Servers
Have you used the export tool? There used to be a way: if you first put the DBs into quiescent mode, then you could actually copy the DBSpaces across (a dbspaces tool, I think... it's been a few years now).
Because with Informix you used to be able to specify the DBSpace(s) to use for a table (maybe even in ALTER TABLE?).
Check the dbaccess tool; there is an export command.
Put the DBs into quiescent mode or shut down, copy the dbspaces, and then attach the table, telling it to point to the new dbspaces file. (The dbspaces tool could be worth looking at... I have manuals around here; they are for 9.2, but it shouldn't have changed too much.)
If both machines use the same version of IDS, then another option would be to use ontape to take a backup on one machine and restore it on the other. You can use the STDIO option and then just stream the backup onto the other machine, where the restore could read from STDIO.
From the "Data Replication for High Availability and Distribution" redbook:
ontape -s -L 0 -F | rsh secondary_server "ontape -p"
You could also create a passwordless ssh connection between the hosts and transfer in a more secure way.
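With made-up host and user names, that would look roughly like:
ontape -s -L 0 -F | ssh informix@secondary_server "ontape -p"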