I have an OpenEdge Progress v10.1C Linux server with a bunch of data on it. We're migrating this data to an SQL server. I just need to get the data off of the server somehow, be it a bunch of CSVs of tables or an SQL file or whatever - I just need the raw data.
I have no experience with this server and I can't seem to figure out how to even connect to it or anything. I just know where the data directory is and that I have a bunch of files in /usr/dlc/bin which are for management (like _progres, pro, ...)
I would really appreciate any help extracting this data!
At a command line, execute $DLC/bin/showcfg -- this will reveal what licenses you have available, which will affect what options you have for exporting the data.
If your license allows, you can get simple text files using the "data dictionary dump". To run that, start a session from a command line like this:
mpro dbName -p dict.p
If your licenses are good, that will bring up the data dictionary. Navigate to Admin -> Dump Data and Definitions -> Table Contents. If you get that far, go ahead and dump what you need.
If the database is large this may take a while. Maybe even a long while.
If you cannot navigate to that point then you do not have an appropriate license. In that case, if you have SQL installed and configured, the simplest thing may be to extract the data that way. But since you are asking this question that seems unlikely -- nonetheless... type "ps -ef | grep sql" and see if anything shows up. If it does, you should be able to connect with an ODBC client.
If all else fails you can try to get someone to write you a custom extract program. That will need to be done by someone with a compiler license and they will need to provide you with r-code. That sort of thing is usually provided as a commercial service.
I'm having a bit of trouble with an old website that I have inherited from someone.
It throws errors about a connection pool being maxed out. When that happens, the website loads just the HTML and nothing from the database. If it is left alone for a while it works fine again, or it recovers when I recycle the IIS application pool in Plesk.
I have done a lot of reading and research, but I still can't quite work it out.
The first thing I read was to look for any code where the database connection was not closed after it had retrieved the information. I haven't found anything like that so far.
Next, I found the stored procedure sp_who2, which I was led to believe would show me the open connections, but I'm a little confused as to whether that is what I'm actually looking at.
When running sp_who2 I get the below.
Is this an open connection? Or is it simply my connection that is currently looking at the database through SQL Server Management Studio?
This database is currently on a shared hosting platform so I don't have the access needed to run some of the other commands that I found.
Ideally I will move the website off a shared host, but I'd like to find out the reason for this before I do. What I'm hoping to find is where the code needs to be adjusted to make it work.
Is this an open connection? Or is it simply my connection that is currently looking at the database through SQL Server Management Studio?
It's both. Your connection is coming from SSMS, and it is an open connection.
There are ways to get more detailed information than this. You can, for example, use sys.dm_exec_connections along with other DMVs like sys.dm_exec_sessions, and construct a query which tells you a lot of information.
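For example, a minimal sketch of such a query (the column choice here is just illustrative, and it assumes you are allowed to query the DMVs):

SELECT s.session_id, s.login_name, s.host_name, s.program_name,
       s.status, c.connect_time, c.client_net_address
FROM sys.dm_exec_sessions AS s
JOIN sys.dm_exec_connections AS c ON c.session_id = s.session_id
WHERE s.is_user_process = 1   -- skip system sessions
ORDER BY c.connect_time;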
But rather than write a query like that yourself, I suggest you download Adam Machanic's excellent sp_whoisactive. It's just a stored procedure you create, which contains a query to pull lots of useful information from the system metadata, with the ability to provide options to customise the output. This procedure is, I would be confident to say, the "default" procedure that everyone eventually uses for this kind of query. I might write my own more limited query against the DMVs from time to time, but most of the time, sp_whoisactive can give you everything you want.
One of the parameters, for example, is @show_sleeping_spids. This will show you connections even if they're not actually running any query. Which is sort of funny given that the procedure name is "who is active", but the usefulness is clear. You would want to execute this procedure like so: exec sp_whoisactive @show_sleeping_spids = 1, and perhaps add other parameters besides. It has powerful filtering options too, but if you don't typically have a lot of connections, you can probably just grab the entire output and eyeball it for relevant info.
If you can't run this because of the permissions available to you on shared hosting, then your options are severely limited. If you are only getting your own session from sp_who2, but you know other stuff is running, then you don't have the VIEW SERVER STATE permission. In such cases, sp_who2 only outputs your own session (because you are allowed to see information about your own session).
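If you do later move to a server where you control permissions, granting yourself that visibility is a single statement -- a sketch, with the login name hypothetical:

GRANT VIEW SERVER STATE TO [your_login];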
Disclaimer: I am somewhat of a n00b when it comes to database programming, so bear with me.
I've been attempting to batch process a rather large amount (~20 GB) of data all contained in .MDF SQL database files. The files contain meteorological data obtained through weather balloons, with each table consisting of ~1-second observations of winds, pressure, height, temps, etc., and are created with our radiosonde tracking software on an unnetworked Windows machine. It is possible (and quite easy) to load the files using the associated software and export the tables as an ASCII text file...however, this process involves manually loading each one. As I'm performing a study that requires as many soundings as possible (we have over 2000), doing this process over and over for several years of twice-daily observations is extremely time-prohibitive.
I've been taking the files off of the computer and putting them on my laptop running Linux Mint, and consider myself to be fluent with Perl...I do most of my data analysis with Perl scripts. That said, I've had the darndest time trying to get into the database files!
I've tried to connect to one of the files with the DBI package, using variants on
$dbh = DBI->connect("DBI:ODBC:$filename") or die "blahblahblah";
I have unixODBC installed and configured, have downloaded "libmyodbc.so" and "libodbcmyS.so", and keep getting the error
DBI connect('','',...) failed: [unixODBC][Driver Manager]Data source name not found, and no default driver specified (SQL-IM002) at dumpsql.pl line 6.
I've tried remedying this a number of ways over the past couple days, and I won't post them here for the sake of brevity. My odbcinst.ini file is as follows:
[MySQL]
Description = ODBC for MySQL
Driver = /usr/lib/x86_64-linux-gnu/odbc/libmyodbc.so
Setup = /usr/sib/x86_64-linux-gnu/odbc/libodbcmyS.so
FileUsage = 1
I'm seriously confused. I THINK I'm doing everything that various online tutorials are suggesting, but everyone else is connecting to servers and these files are all local and in the same directory! Could anyone attempt to point me in the right direction? All I want is to calculate meteorological values using vertical sounding data! Am I missing something totally obvious?
Any help would be greatly appreciated!
It seems the original database server was a Microsoft SQL Server (MDF files). I am afraid these files alone are useless on a Linux machine. You need a Microsoft SQL Server on a Windows machine to get access to the contained data.
You described that you are able to attach an MDF file to a SQL Server manually and then export the needed data as text files. Try to automate that. I'm not an MS SQL Server expert, but it should be possible.
E.g. here is a tutorial on attaching and detaching an MDF file via T-SQL. So my approach would be to write a script which iterates over the 2000 MDF files, attaches each to the SQL Server, executes a query to export your data, and then detaches the MDF.
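Roughly, the per-file T-SQL could look like the sketch below. The database name and file path are hypothetical, and FOR ATTACH_REBUILD_LOG assumes you do not have (or need) the original .ldf log files:

-- attach one radiosonde MDF under a temporary name
CREATE DATABASE SoundingTmp
    ON (FILENAME = 'C:\soundings\sounding_0001.mdf')
    FOR ATTACH_REBUILD_LOG;

-- run your export query here (or bcp out from the command line)

-- detach so the next file can be processed
EXEC sp_detach_db 'SoundingTmp';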
I have spent quite a bit of time scouring the overflow forums and using other resources to try and figure this out, without success. My apologies if this has been answered elsewhere, but to my knowledge it hasn't.
Here is a little background. I have some experience with SQL, Visual Basic and Auto Hot Key (awesome program). Currently I am trying to learn more than the basic stuff I know about SQL (I'm a little rusty too).
For my previous and current job I work(ed) a lot with IBM's iSeries (or 'Mainframe' as some veterans like to call it). Specifically, the version is IBM DB2 PE 9.7 FP5 SRM R1 on DB2 i5/OS.
As you may know, iSeries related emulators have a feature that allows its users to record macros for later playback. This feature of course has its limitations. I have written some from scratch in Visual Basic that are more flexible in the operations that they can perform. However, you can only do so much with VB.
Regardless of how you look at it, iSeries is slow IMO. In order to pass/retrieve information to/from its database/server, its users are limited to the speed of the program (among other things). VB macros are subject to smart pauses/timeouts, as these are needed for scripts to work (like Auto Hot Key).
iSeries does have a feature where you can query information, interact with libraries (schemas if I have that right?) and tables using the i5/OS Query manager. It also has an FTP feature (which has its uses I guess).
I have started playing around with PowerShell and SSH/Qshell, and I have read a lot of documentation in IBM's extensive support section on their site regarding iSeries and all things related.
So here are the questions:
Is qshell/SSH an efficient way to retrieve information from a database, or more specifically, the libraries that 'DB2 for iSeries' uses (.lib/.file extension files?)?
Wouldn't writing SQL scripts and executing them via SSH/Qshell be faster than using the iSeries emulator itself?
Is this only possible via port forwarding or tunneling?
How do I find out more about the server via SSH commands? I can navigate the server's directories pretty easily and view files that I otherwise cannot see within iSeries. Its database files have .lib/.file/.mbr extensions that I cannot view using the 'cat' command. I am assuming this would require using SQL commands?
Do I need to log on as a root user in order to have read/write access privileges and perform anything SQL related?
As mentioned above, I have played around with ssh/qshell, but I am having difficulty bringing up the mysql/sql prompt to do anything (I am using OpenSSH 4.7p1 / a Unix CLI?), which I believe changes from $ to > when you succeed. I have been able to do this, but I am unable to execute any SQL-related commands, so I am not sure what I am doing wrong.
Below is an example of me logging into the server over ssh via PowerShell and trying to execute something SQL related:
PS H:> ssh -1 myusername@blabla.something.com "mysql -u myusername -e 'show tables;'"
enter password: password
connected
Could not chdir to home directory /home/myusername: A file or directory in the path name does not exist
bsh: mysql: execute permission denied
bsh: cannot execute
Connection closed to blabla.something.com
This is an example of just logging in over ssh as normal:
ssh -1 myusername@blabla.something.com
Could not chdir to home directory /home/myusername: A file or directory in the path name does not exist
$ Mysql
mysql: execute permission denied
$ "mysql
'show databases;'"
mysql
'show databases;'" not found
Any insight as to what I am doing wrong, or feedback on my questions, is greatly appreciated. I know there are alternatives to what I am trying to do, such as PuTTY (not really an option at work) or ODBC drivers (I can't download an IBM iSeries fix pack to get the drivers or repair the iSeries installation anyway, since I don't have Windows admin privileges).
UPDATE
First, I want to thank you all for your comments and your insight.
Warren:
Thank you for your insight. IBM DB2 PE 9.7 FP5 SRM R1 is indeed DB2 LUW, or "COBRA". This particular emulator is an adaptation of IBM's 3270 emulator, a knock-off brought to you by Attachmate (they added a couple of bells and whistles and no longer offer support, ODBC drivers, etc.). Within this version, I have the option to utilize IBM DB2 Query Manager for i5/OS. I would agree that the server is partitioned (I think?) as I have been able to call different prompts like DB2/SQLJ/JAVA, though I haven't been too successful with using them lol. Perhaps my attempts below will reveal a little more.
James:
Thank you for your input; I never thought to do this from PowerShell directly. As for the redbook, I have it! I'm still reading it, though. This may be a little more advanced than where I currently stand with PowerShell and ADO.NET. I will need to take a more in-depth look at your coding example to gain a better understanding of it.
Buck:
I appreciate your point-by-point feedback. Not sure what your setup is, but IBM i in general (especially anything after i5) is insanely fast. The company I work for... or at least the immediate people I work with... know very little in regards to i/vba/qsh/etc. To put it politely. So, much of what I have learned has been through reading and applying what I learn. This, actually, is only my second post. On any website, ever. I don't like to just post away because I can't figure something out in a minute. I'm stubborn and will spend hours if I have to make a script work.
As for STRSQL, the equivalent I use is STRQRY. From reading IBM's i5/OS Query Manager PDF (300 pgs -_-), I would agree that this is a very powerful tool. The only drawbacks are 1. you have to make a form that will format the query, and 2. you have to create the query using SQL. This is no big deal whatsoever. Unfortunately, there are hundreds of schemas, and sometimes thousands of .FILE extensions within any given LIB (among other extension types). Luckily I have narrowed this down to the primary LIB that I access most. Unfortunately, there are something like 4000+ files within this library, and files within some of those files (︻╦╤─ ^ _ ^). Going through all of them is a little time consuming, to say the least.
There are some "macros" that perform a query in batch and then print the data to a formatted file on the server (which takes, erm... like 30 mins for 500-600 pages). Is there any way I could view the parameters of this batch run? "Work with queries/query forms?"
The primary reason I am more so interested in using QSH>SQL to perform a query and navigate a database is due to the fact I would like to improve my SQL skillz (beginner) but I would like to be able to apply this knowledge to other environments in the future.
God, I write too much s***
My recent efforts follow. (I am sorry, my formatting sucks.)
After I log in via ssh this is what I attempted:
$ db2 "select * from sysibm.sysdummy1"
db2: cannot execute
$ cd /usr/bin
$ ls
ajar
qsh
sqlj
db2
db2profc
dataq
java
javadoc
javah .....
$ db2 "select from qsys.lib
++++++CLI ERROR++++++
SQLSTATE 42704
LIB in QSYS type *file not found // there are tons of .file extensions within btw
$ cd ..
$ cd qsys.lib
$ ls
A crap load of .lib .menu .file (*file?) extensions
$ db2 "select * from 1234abcd.lib"
++++++CLI ERROR++++++
SQLSTATE 42601
NATIVE ERROR CODE: -104
Token not valid bla bla VALID TOKENS: FOR SKIP FETCH ORDER GET // success! well still fail
Questions
So my problem now is that I just need to use valid tokens. Does anybody know of a good example? (I'm going to keep attempting on my own.)
For the life of me I cannot replicate the process above to where I at least get a CLI error / SQLSTATE. Everything now returns:
"db2: you should just give up lol"
I try to log everything I do; guess I missed something important, eh? Somewhere between...
$ cd /usr/bin
$ ls
and
$ db2 "select * from iforget.lib"
Something happened to where I could at least execute SQL statements. Now, nothing.
db2 : t(ಠ益ಠt)
It's late. So tired.
Any feedback = Appreciation
The qshell (and DB2) equivalent of the mysql CLI command is db2:
db2 "select * from sysibm.sysdummy1"
If you have the IBM Access .NET data provider installed you can query directly from PowerShell:
# Assembly name from \\HKCR\Installer\Assemblies\Global
$an = 'IBM.Data.DB2.iSeries,Version="12.0.0.0",PublicKeyToken="9CDB2EBFB1F93A26",Culture="neutral"'
# Connection string
$cs = 'DataSource=10.0.0.50;UserID=QPGMR;Password=****'
Add-Type -AssemblyName $an
$cn = New-Object IBM.Data.DB2.iSeries.iDB2Connection($cs)
$cn.Open()
$cmd = $cn.CreateCommand()
$cmd.CommandText = "SELECT * FROM SYSIBM.SYSDUMMY1"
$da = New-Object IBM.Data.DB2.iSeries.iDB2DataAdapter($cmd)
$ds = New-Object System.Data.DataSet
$cnt = $da.Fill($ds)
Write-Host "$cnt records selected."
$cmd.Dispose()
$cn.Close()
foreach ($dt in $ds.Tables) {
$dv = New-Object System.Data.DataView($dt)
$dv | Format-Table -AutoSize
$dv.Dispose()
}
$ds.Dispose()
For more information see the IBM Redbook Integrating DB2 Universal Database for iSeries with Microsoft ADO.NET.
1) Is qshell/SSH an efficient way to retrieve information from a database, or more specifically, the libraries that 'DB2 for iSeries' uses (.lib/.file extension files?)?
The synopsis was helpful; it appears you are interested in doing ad hoc queries rather than writing an end-user application. qshell is just as efficient as the IBM i command STRSQL in terms of issuing SQL statements and seeing the results immediately in an emulator window. I just queried a 2 million row table on a column without an index and got the first page of results in 0.23 seconds.
2) Wouldn't writing SQL scripts and executing them via SSH/Qshell be faster than using the iSeries emulator itself?
It is difficult to understand what faster might mean. It seems you are using the emulator solely to play back macros; this is not particularly efficient. Typically, one would use the emulator to get to a command line, issue the STRSQL command and run ad hoc SQL queries from there. That is very fast in terms of getting the results to display.
3) Is this only possible via port forwarding or tunneling?
This depends entirely on how the network was configured.
4) How do I find out more about the server via SSH commands? I can navigate the server's directories pretty easily and view files that I otherwise cannot see within iSeries. Its database files have .lib/.file/.mbr extensions that I cannot view using the 'cat' command. I am assuming this would require using SQL commands?
Whoever hired you must have given you some basic information to get you started, so you probably already know the libraries (schemas) and files (tables) which make up the production environment. So I'm assuming that you aren't asking about SQL catalogs like SYSCOLUMNS and SYSTABLES but rather things which are more specific to IBM i. The reference material for IBM i is located in the Infocenter.
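That said, the SQL catalogs are worth knowing even so; they are queryable like any other table. A sketch, with the library name hypothetical:

SELECT TABLE_NAME, TABLE_TYPE
FROM QSYS2.SYSTABLES
WHERE TABLE_SCHEMA = 'MYLIB';

SELECT COLUMN_NAME, DATA_TYPE, LENGTH
FROM QSYS2.SYSCOLUMNS
WHERE TABLE_SCHEMA = 'MYLIB' AND TABLE_NAME = 'MYFILE';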
5) Do I need to log on as a root user in order to have read/write access privileges and perform anything SQL related?
No system demands root privileges in order to perform read/write access. Your employer will need to provide you with a user profile having privileges to the tables you want to access.
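In SQL terms, that object-level authority looks like this -- a sketch with hypothetical names, run by someone who already has the right to grant it:

GRANT SELECT ON MYLIB.MYFILE TO MYUSER;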
If you need a light weight terminal emulator, look at the tn5250 project on Sourceforge. You won't need Windows admin privileges to install or run it.
I'm in a database class and the teacher wants us to connect through ssh to an Oracle database set up on a school server, and it's been extremely frustrating. She wants us to turn in an SQL file that will create all the necessary tables, insert tuples, and run certain select commands. I've found it very hard to produce an SQL file with everything in it once I get everything right, I haven't found a way to test the SQL file against the server, and I don't think I have permission to drop tables anyway. Anyway, my question is: is there a way I can take an SQL file with create table and insert commands and convert it to something like an Access .mdb database, or something local I can mess around with? Any help would be greatly appreciated; I didn't find much help on Google.
You seem to be confusing terminology a bit; SQL*Plus is a client application, and the database is a shared server resource. You want to create schema objects from an SQL file, I think. But anyway...
There's a very useful online resource for experimenting with bits of SQL in various flavours, SQL Fiddle. Technically not 'offline' of course, but I'm taking that to mean off your school's network, not necessarily completely isolated. You can create tables and run your inserts in the schema panel, and then run queries against that. Make sure you pick the right database product from the drop-down menu so you're using syntax that is valid for your class. You'll see a lot of answers here with links to demonstration fiddles.
That's great for a lot of things but if you want something a bit more robust and scalable, and entirely offline, you can install VirtualBox and get a pre-built developer VM image which gives you a ready-to-go Linux environment with a database installed and running. You can run whatever you want against that, you have SQL*Plus and SQL Developer available, and you can connect to the DB from your host machine if you want to. You can create and test your scripts against that, and in a format that will be closer to what you have to hand in than you'd use with SQL Fiddle.
This is much less work than installing the Oracle software yourself and learning how to create and manage the database, which I'm guessing is a bit more advanced than you need at the moment, based purely on the kinds of things your question suggests you're doing. I think you'd learn a lot from the installation and build process, but I'd get comfortable with Oracle first, and practice in a VM, as it's so much easier to trash it and start again when you mess something up.
If I wanted 'something local I can mess around with', I would go for a VM image. Mo posted a walkthrough of the VM setup as a comment to a previous similar answer, which you might find helpful.
"Something local I can mess around with" in terms of Oracle Database is Oracle Database 11g Express Edition. It's free and can be downloaded from oracle.com. You certainly can test sql-files run through sqlplus on Oracle Database XE.
To get the MS Access (GUI) feeling, download SQL Developer. It's free.
Best of luck!
Bjarte
I was wondering if there is a way to automatically append to a script file all the changes I am making to my columns, tables, relationships, etc.
The thing is I am doing a lot of different changes on a TEST db and the idea will be to apply this change script when I move the test db to production... hence keeping production data but applying all schema and object changes.
Is there an easy way to do this? Can it also migrate database diagram changes?
I have seen how you can create a change script each time you make a change, but this means I have to copy and paste them into a master file. Actually, that's pretty easy!
I was just wondering if I was missing something?
Do not make changes to the test server using the UI. Write scripts and keep them under source control. You can test your scripts starting from backups of the live data, and you can tune your scripts until they achieve the desired result. Then you can check in the scripts for reference and later apply them on the live server. See this article Version Control and Your Database.
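For example, refreshing the test database from a production backup before replaying your scripts is a single RESTORE (a sketch; the logical file names and paths here are hypothetical -- check yours with RESTORE FILELISTONLY):

RESTORE DATABASE TestDb
FROM DISK = 'D:\backups\ProdDb.bak'
WITH MOVE 'ProdDb'     TO 'D:\data\TestDb.mdf',
     MOVE 'ProdDb_log' TO 'D:\data\TestDb_log.ldf',
     REPLACE;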
BTW, check out the SSMS Tools Pack; I think it may do what you want (I'm not sure). My advice stands nonetheless: version your schema, use explicitly created/saved scripts, use source control.
There's no way to directly generate a "delta" script in SSMS.
However, if every time you publish changes you script out the entire database, including data, to SQL using the SQL Server Database Publishing Wizard, you should be able to extract diffs between the versions and get your deltas that way.
If money is no object, you can purchase Visual Studio Team System Database Architect edition and use its fantastic database comparison tools to generate and version control exactly the diffs you want.
Try using tablediff, which comes with SQL Server 2005.
SQL Server 2005 TableDiff Utility
tablediff Utility
We have the process where when a developer gets done with a change, they then script it out and check it into Subversion. In Subversion we have a folder for Tables, Stored Procs, Data, etc. They script it out so it is repeatable (i.e. don’t insert the new data if it is already there.) This is important to do anyway so you keep the history of changes for a given object in the database.
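A repeatable script typically guards every change, along these lines (a sketch; the object names are hypothetical):

-- add the column only if it is not already there
IF NOT EXISTS (SELECT 1 FROM sys.columns
               WHERE object_id = OBJECT_ID('dbo.Customer') AND name = 'Email')
    ALTER TABLE dbo.Customer ADD Email VARCHAR(255) NULL;

-- insert the reference row only if it is missing
IF NOT EXISTS (SELECT 1 FROM dbo.Status WHERE Code = 'NEW')
    INSERT INTO dbo.Status (Code, Description) VALUES ('NEW', 'Newly created');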
In the past, we would just enter each of the files that we wanted scripted out into a text file (i.e. FileListV102.txt). When we were ready to make a release we would do “get latest” on all of the files (from VSS back then.) We then had a simple utility that would read the “file list” file and open each of those files in turn concatenating them into an output file. That is pretty easy to code.
We outgrew that, and now we have a release management tool (which can be found here and will be on sale mid-September) that takes all of the files and creates a big SQL script file out of them. It does it in the order that you would expect based on the folder names -- so files found in the "Tables" folder are run before those in the "Data" folder, etc.
Either way, once you are done you have a big SQL script file that you can then apply to a fresh copy of production and that is what you test against.
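If you would rather not maintain a concatenation utility yourself, sqlcmd's :r include directive can do the same job -- a sketch, with hypothetical file names:

-- master.sql, run with: sqlcmd -S yourserver -d YourDb -i master.sql
:r .\Tables\Customer.sql
:r .\StoredProcs\GetCustomer.sql
:r .\Data\StatusCodes.sql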
I know I'm way late to the party, but I just wanted to add that there are dozens of third-party products out there. Some are very good, some are very cheap or free, and some are a mixture. I listed 22 here:
http://bertrandaaron.wordpress.com/2012/04/20/re-blog-the-cost-of-reinventing-the-wheel/
We have been using relatively new software called Kal Admin.
It has a change management feature and lets you distribute selected changes to other databases very easily. We used to do this by comparing two databases, but that did not satisfy our need for change tracking.
BTW, Kal Admin has metadata and data compare capabilities as well.