Debugging DB2 Session

I'm just the unfortunate soul debugging an iSeries/RPG/SQL issue... (I'm not an RPG expert.)
I have a program which uses temporary tables declared on DB2 on the iSeries. The temporary tables are declared in a session, so when I run the application and then debug the RPG via a terminal on the iSeries (I presume this is the right terminology?), I'm effectively in two different sessions.
The SQL I am looking at does something like this...
select blah from SESSION/#temp_table left join #real_table left join _to_many_other_tables
While I can query the "real table" fine, I can't see the contents of the SESSION table... so how would I go about querying a table in a different session? Presumably SESSION/#temp_table is something I could query by doing something like select * from 123123/#temp_table, but how would I know what the other session's id/name/variable/access token looks like?

You can use STRSRVJOB to debug another job, but this probably won't let you query that job's QTEMP. Traditionally, midrange programmers debug jobs like these interactively. Sign on to a green screen session and CALL the program you want to debug.
Between STRDBG, STRISDB, the system debugger and the SEP facility found in RDi, there are many options to tackle the debugging problem. Additionally, the open source DBG400 might be something to look at.
EDIT:
The problem is a difficult one. It looks like this is a client/server type app. When writing an app like this, I usually put a debug switch into it so I can log what's happening for debugging purposes. Stored procedures on DB2 for i can be implemented purely in SQL, or they can call out to an HLL like RPG for the implementation.
If your SPs are external, say RPG, then add some code that will copy the temporary files to a real library on the system. Implementing it as a system() or QCMDEXC call is not very intrusive to the existing program code. You can turn it on and off with a data area - again, very unintrusive, but you can set the debug state from outside the application.
If your SPs are SQL, I'd modify them to write a duplicate of the temporary file in a real library. Say there's a CREATE TABLE QTEMP/TEMP001 WITH DATA ... Add a CREATE TABLE DEBUGLIB/TEMP001 WITH DATA ... If you wanted to, you could key this extra code on a special 'debug' user profile, although that might require some security changes on the IBM i side.
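For instance, the SQL flavour of that might look like this (DEBUGLIB and the example source table are made up; use whatever fits your naming conventions):
-- Existing statement that builds the work file in the job's QTEMP
CREATE TABLE QTEMP/TEMP001 AS
    (SELECT * FROM SOMELIB/ORDERS WHERE STATUS = 'OPEN') WITH DATA;
-- Debug-only duplicate in a permanent library, so another job (or an SQL client) can query it
CREATE TABLE DEBUGLIB/TEMP001 AS
    (SELECT * FROM QTEMP/TEMP001) WITH DATA;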

Related

How to track data changes when deploying a SQL SSDT project

We upgrade our databases using SSDT DACPAC deployments. Developers work on the projects in Visual Studio 2015, modifying the schema where needed.
Developers also add Pre and Post deployment scripts to the projects. Some of these scripts ensure that certain tables always have the expected data in them. Others add, move, or mutate data as part of the platform upgrade.
We need to improve the output generated during database deployments so that, after a deployment, we have a human readable list of any data that changed as part of the deployment.
I'm currently considering two approaches. But neither seems ideal. They are:
1) Manually add logging to all Pre and Post scripts in the project. This is certainly an option. But it is not ideal because it complicates the upgrade scripts and is also likely to sometimes be missed or incorrectly done by developers. Since the goal is to detect unexpected data changes that occurred as part of the deployment, this uncertainty is a real bummer. A generic solution is preferable.
2) Here's my best stab at a generic solution: As part of the deployment process, enable SQL Change Data Capture for all user tables in the database. Then, at the end of the deployment, collect all captured changes and disable CDC. I've actually got this working. But the process of enabling CDC on all tables in the database takes a few minutes (one of our databases has 775 tables, it takes about 3 minutes to enable CDC on all of them). This approach also just feels very... heavy?
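Roughly, the enable-everything step boils down to something like this (a simplified sketch, not my exact script; it assumes sysadmin rights and the usual CDC prerequisites such as SQL Server Agent running):
-- Turn on CDC for the database, then for every user table
EXEC sys.sp_cdc_enable_db;
DECLARE @schema sysname, @table sysname;
DECLARE tbl CURSOR LOCAL FAST_FORWARD FOR
    SELECT s.name, t.name
    FROM sys.tables t
    JOIN sys.schemas s ON s.schema_id = t.schema_id
    WHERE t.is_ms_shipped = 0
      AND s.name NOT IN ('cdc', 'sys');      -- skip CDC's own bookkeeping tables
OPEN tbl;
FETCH NEXT FROM tbl INTO @schema, @table;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sys.sp_cdc_enable_table
         @source_schema = @schema,
         @source_name   = @table,
         @role_name     = NULL;              -- no gating role for this throwaway capture
    FETCH NEXT FROM tbl INTO @schema, @table;
END
CLOSE tbl;
DEALLOCATE tbl;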
My question is: is there a better way to achieve my goal of reliably generating a report of the data that was changed as part of the database deployment, given that the deployment runs arbitrary Pre and Post deployment scripts?
If there does not seem to be a better way, I would appreciate feedback on option #2. Am I crazy for considering this?
Some of these scripts ensure that certain tables always have the expected data in them
First of all, Pre-Deploy/Post-Deploy scripts are not validated. I prefer to use them as a launching point only and do the actual work inside stored procedures.
So instead of writing:
INSERT INTO dbo.tab1(id, col1, col2) VALUES (...,..., ...);
and placing it in the Post-Deploy script, you could put:
EXEC dbo.Populate_Tab1;
and define the stored procedure as:
CREATE PROCEDURE dbo.Populate_Tab1
AS
BEGIN
    -- idempotent script, implemented here with MERGE
    WITH src(id, col1, col2, ...) AS (
        SELECT 1, ..., ... UNION ALL
        SELECT ...
    )
    MERGE dbo.tab1 AS trg
    USING src
        ON trg.id = src.id
    WHEN MATCHED THEN
        UPDATE SET trg.col1 = src.col1, trg.col2 = src.col2, ...
    WHEN NOT MATCHED BY SOURCE THEN
        DELETE
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (id, col1, col2, ...) VALUES (src.id, src.col1, src.col2, ...);
END
Key point: the stored procedure must be idempotent.
This way you are always sure that the table contains the desired data, and the stored procedure is validated.
A similar approach is described by Kamil Nowinski:
Script and deploy the data for database from SSDT project
Advantages:
The stored procedures are part of the database project.
They will be validated and compiled, so you avoid potential errors from uncontrolled code.
Changes to the SP appear in the output script only when something actually changes.
It’s easier to review the script before a run, as it doesn’t contain unnecessary code.
In the post-deployment script there is only one line of code, which never changes.

Unable to recompile package in Oracle 9i

I have a need to recompile a package in Oracle 9i, but the session hangs forever. When I checked V$SESSION_WAIT, I found that it is waiting on the event 'library cache pin'. I couldn't find a solution for the 9i version. Is there any way to find the session that is executing my package and kill it?
Sure.
To find which sessions are running code that contains a given name:
SELECT s.*, sa.*
FROM v$session s
LEFT JOIN v$sqlarea sa
  ON s.sql_address = sa.address AND s.sql_hash_value = sa.hash_value
WHERE sa.sql_text LIKE '%your_package_name_here%';
After this, you have the sid and serial# so you can kill the session you need to kill. (The query above may return sessions that you do not need to kill; for example, it will find itself :) )
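Once you have them, the kill itself is a one-liner (substitute the sid and serial# returned by the query above):
-- sid and serial# taken from the query above
ALTER SYSTEM KILL SESSION '123,45678';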
Oracle offers no built-in easy path to do this.
If your application uses the DBMS_APPLICATION_INFO routines to register module usage you're in luck: you can simply query V$SESSION filtering on MODULE and/or ACTION. Alternatively, perhaps you have trace messages in your PL/SQL which you use?
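If that instrumentation is in place, the lookup is as simple as this (the module name is just a placeholder):
-- The application would have registered itself with something like:
--   DBMS_APPLICATION_INFO.SET_MODULE(module_name => 'MY_PKG', action_name => 'recalc');
SELECT sid, serial#, username, module, action
FROM   v$session
WHERE  module = 'MY_PKG';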
Otherwise, you need to start trawling V$SQLTEXT (or another of the several views which show SQL) for calls containing the package name. Remember to make the search case-insensitive. This will give you a SQL_ID you can link to records in V$SESSION.
This will only work if your package is the primary object; that is, if it is at the top of the call stack. That is one explanation for why the package is locked for so long. But perhaps your package is called from some other package: in that case you might not get a hit in V$SQLTEXT. So you will need to find the programs which call it, through ALL_DEPENDENCIES, and sift V$SQLTEXT for them.
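Finding those callers is a straightforward dictionary query (substitute your package name):
-- Programs that reference the package and might be holding the pin
SELECT owner, name, type
FROM   all_dependencies
WHERE  referenced_name = 'YOUR_PACKAGE_NAME'
AND    referenced_type = 'PACKAGE';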
Yes, that does sound like a tedious job. And that is why it is a good idea to include some form of trace in long-running PL/SQL calls.

A process monitor based on periodic sql selects - does this exist or do I need to build it?

I need a simple tool to visualize the status of a series of processes (ETL processes, but that shouldn't matter). This process monitor needs to be customizable with color coding for different status codes. The plan is to place the monitor on a big screen in the office, making any faults instantly visible to everyone.
Today I can check the status of these processes by running an SQL statement against the underlying tables in our Oracle database. These queries return the above-mentioned status codes for each process. I'm imagining using these SQL statements, run periodically (say, every minute or so), as the input to this monitor.
I've considered writing a simple web interface for doing this, but I'm thinking something like this should exist out there already. Anyone have any suggestions?
If you are just displaying on one workstation, another option is SQL Developer Custom Reports. You would still have to fire up SQL Developer and start the report, but custom reports have a setting so they can be refreshed at a specified interval (5-120 seconds). Depending on the 'richness' of the output you want, you can either:
Create a simple Table report (style = Table)
Paste in one of the queries you already use as a starting point.
Create a PL/SQL Block that outputs HTML via DBMS_OUTPUT.PUT_LINE statements (Style = plsql-dbms_output)
Get creative as you like with formatting, colors, etc. using HTML tags in the output. I have used this to create bar graphs showing the progress of V$SESSION_LONGOPS entries. A full description and screenshots are available here: Creating a User Defined HTML Report in SQL Developer.
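As a rough illustration of the PL/SQL / DBMS_OUTPUT style, something like the block below works (the etl_process_status table and its status codes are invented for the example):
-- Emits a colour-coded HTML table, one row per monitored process
BEGIN
  DBMS_OUTPUT.PUT_LINE('<table border="1"><tr><th>Process</th><th>Status</th></tr>');
  FOR r IN (SELECT process_name, status_code
            FROM   etl_process_status
            ORDER BY process_name) LOOP
    DBMS_OUTPUT.PUT_LINE('<tr><td>' || r.process_name || '</td><td bgcolor="'
        || CASE r.status_code WHEN 'OK'   THEN 'lightgreen'
                              WHEN 'WARN' THEN 'yellow'
                              ELSE 'red' END
        || '">' || r.status_code || '</td></tr>');
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('</table>');
END;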
If you just want to get some output moving, you can forgo SQL Developer, schedule a process to use your PL/SQL block to write HTML output to a file, and use a browser to display the generated output on your big screen. Alternately, make the file available via a web server so others in your office can bring it up. Periodically regenerate the file, and make sure to add a refresh meta tag to the page so browsers will periodically reload it.
Oracle Application Express is probably the best tool for this.
I would say roll your own dashboard. It depends on your skill set, but I'd do a basic web app in Java (Spring or some MVC framework; I'm not a web developer, but I know enough to create a basic functional dashboard). Since you already know the SQL needed, it shouldn't be difficult to put together, and you can modify it as needed in the future. Just keep it simple, I would say (you don't need middleware, single sign-on, or fancy views/charts).

Replace/Rename the Online Database

I have an MS SQL Server 2005 database named mydb, which is being accessed by 7 applications from different locations.
I have created a copy of it named mydbNew and tuned it by adding primary keys and indexes and changing queries in stored procedures.
Now I want to replace the old db "mydb" with the new db "mydbnew".
Please tell me the best approach to do this. I thought about changing the web.config files, but not all of the applications accessing the database are accessible to me, so I can't go that route.
Please give me your expert opinion on how I can swap the databases in minimum time without affecting the other databases and all the applications.
To be clear, by replacing the old db with the new db I mean that I want to rename the old db "mydb" to "mydbold" and then rename the new db "mydbnew" to "mydb".
Thanks
Your plan will work, but it does carry a high risk, especially since I'm assuming this is a system that has users actively changing data, which means your copy won't have the same level of updated content in it unless you do a cut right before go-live. Your best bet is to migrate your changes carefully into the live system during a low-traffic / maintenance period and extensively test it once you're done. Prior to doing this, or the method you mentioned previously, back up everything.
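For what it's worth, the rename itself is the quick part; a cut-over during a maintenance window would look roughly like this (names taken from your question, and only a sketch - test it first):
-- Run during a maintenance window, after a full backup of both databases
-- Kick active connections out of the old database and roll back their open work
ALTER DATABASE mydb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
-- Swap the names
ALTER DATABASE mydb    MODIFY NAME = mydbold;
ALTER DATABASE mydbnew MODIFY NAME = mydb;
-- The retired copy stays in SINGLE_USER; the renamed mydbnew (now mydb) remains open for use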
All of the changes you described above can be made to an online database without the need to actually bring it down. However, some of those activities will change the way in which the data is affected by certain actions (changes to stored procs), which means that during the transition the behaviour of the system or systems may be unpredictable. You should therefore either complete this update at a low point in day-to-day operations or take it down for a maintenance window.
SQL Server comes with the ability to generate a script file from your database; you can also do this manually by clicking on the object you want to script and selecting the Script -> CREATE option. Depending on the amount of changes you have to make, it may be worthwhile to script your whole new database (by clicking on the new database and selecting Tasks -> Generate Scripts... and selecting the items needed).
If you want to script out just the new things individually, then you simply click on the object you want to script, select Script <object> as ->, then select DROP and CREATE To if you want to replace the original version (like replacing a stored proc), or select CREATE To if you're adding new objects.
Once you have all the things you want to add/update as a script, you're then ready to execute that against the live database. This would be the part where you back up everything. Once you're happy everything is backed up and the system is in maintenance or a low-traffic period, you execute the script. There may be a few problems when you do this; you will need to fix these as quickly as possible (usually mostly just 'already exists' errors, which is why drop-and-create scripts are good), and if anything goes really wrong, restore your backups and try again (after figuring out what happened and how to fix it).
Make no mistake: if you have a lot of changes to make, this could be a long process, or it could take mere minutes. You just need to adapt if things go wrong and be sure to cover yourself with backups (and extensive prayer). Good luck!

Do you put your database static data into source-control ? How?

I'm using SQL Server 2008 with Visual Studio Database Edition.
With this setup, keeping your schema in sync is very easy. Basically, there's a 'compare schema' tool that allows me to sync the schema of two databases and/or a database schema with a source-controlled creation script folder.
However, the situation is less clear when it comes to data, which can be of three different kinds:
Static data referenced in the code. Typical example: my users can change their settings, and their configuration is stored on the server. However, there's a system-wide default value for each setting that is used in case the user didn't override it. The table containing those default settings grows as more options are added to the program. This means that when a new feature/option is checked in, the system-wide default setting is usually created in the database as well.
Static data, e.g. a product list populating a drop-down list. The program doesn't rely on the existence of a specific product in the list to work. This can be, for example, a list of Unicode-encoded products that should be deployed in production when the new "unicode version" of the program is deployed.
Other data, i.e. everything else (logs, user accounts, user data, etc.).
It seems obvious to me that my third item shouldn't be source-controlled (of course, it should be backed up on a regular basis).
But regarding the static data, I'm wondering what to do.
Should I append the insert scripts to the creation scripts, or maybe use separate scripts?
How do I (as a developer) warn the people doing the deployment that they should execute an insert statement?
Should I differentiate my two kinds of data? (The first one is usually created by a dev, while the second one is usually created by a non-dev.)
How do you manage your DB static data?
I have explained the technique I used in my blog Version Control and Your Database. I use database metadata (in this case SQL Server extended properties) to store the deployed application version. I only have scripts that upgrade from version to version. At startup the application reads the deployed version from the database metadata (lack of metadata is interpreted as version 0, ie. nothing is yet deployed). For each version there is an application function that upgrades to the next version. Usually this function runs an internal resource T-SQL script that does the upgrade, but it can be something else, like deploying a CLR assembly in the database.
There is no script to deploy the 'current' database schema. New installations iterate through all intermediate versions, from version 1 to the current version.
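A minimal sketch of the idea (the property name here is arbitrary):
-- First deployment: record that version 1 is now in place
EXEC sys.sp_addextendedproperty @name = N'SchemaVersion', @value = N'1';
-- Application start-up: read the deployed version (no row means nothing is deployed yet)
SELECT value
FROM   sys.extended_properties
WHERE  class = 0 AND name = N'SchemaVersion';
-- Every upgrade script ends by bumping the number
EXEC sys.sp_updateextendedproperty @name = N'SchemaVersion', @value = N'2';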
There are several advantages I enjoy with this technique:
It is easy for me to test a new version. I have a backup of the previous version, I apply the upgrade script, then I can revert to the previous version, change the script, try again, until I'm happy with the result.
My application can be deployed on top of any previous version. Various clients have various deployed versions. When they upgrade, my application supports upgrade from any previous version.
There is no difference between a fresh install and an upgrade, it runs the same code, so I have fewer code paths to maintain and test.
There is no difference between DML and DDL changes (your original question). They are all treated the same way, as scripts run to change from one version to the next. When I need to make a change like you describe (change a default), I actually increase the schema version even if no other DDL change occurs. So at version 5.1 the default was 'foo', in 5.2 the default is 'bar', and that is the only difference between the two versions, and the 'upgrade' step is simply an UPDATE statement (followed of course by the version metadata change, i.e. sp_updateextendedproperty).
All changes are in source control, part of the application sources (T-SQL scripts mostly).
I can easily get to any previous schema version, eg. to repro a customer complaint, simply by running the upgrade sequence and stopping at the version I'm interested in.
This approach saved my skin a number of times and I'm a true believer now. There is only one disadvantage: there is no obvious place to look in source to find 'what is the current form of procedure foo?'. Because the latest version of foo might have been upgraded 2 or 3 versions ago and it wasn't changed since, I need to look at the upgrade script for that version. I usually resort to just looking into the database and see what's in there, rather than searching through the upgrade scripts.
One final note: this is actually not my invention. This is modeled exactly after how SQL Server itself upgrades the database metadata (mssqlsystemresource).
If you are changing the static data (adding a new item to the table that is used to generate a drop-down list) then the insert should be in source control and deployed with the rest of the code. This is especially true if the insert is needed for the rest of the code to work. Otherwise, this step may be forgotten when the code is deployed and not so nice things happen.
If static data comes from another source (such as an import of the current airport codes in the US), then you may simply need to run an already documented import process. The import process itself should be in source control (we do this with all our SSIS packages), but the data need not be.
Here at Red Gate we recently added a feature to SQL Data Compare allowing static data to be stored as DML (one .sql file for each table) alongside the schema DDL that is currently supported by SQL Compare.
The idea is that when you want to push changes to your target server, you do a comparison using the scripts as the source data source, which generates the necessary DML synchronization script to update the target. This means you don't have to assume that the target is being recreated from scratch each time. In time we hope to support static data in our upcoming SQL Source Control tool.
David Atkinson, Product Manager, Red Gate Software
I have come across this when developing CMS systems.
I went with appending the static data (the stuff referenced in the code) to the database creation scripts, then a separate script to add in any 'initialisation data' (like countries, initial product population etc).
For the first two steps, you could consider using an intermediate format (e.g. XML) for the data, then using a home-grown tool, or something like CodeSmith, to generate the SQL, and possibly source files as well, if (for example) you have lookup tables which relate to enumerations used in the code - this helps enforce consistency.
This has another benefit that if the schema changes, in many cases you don't have to regenerate all your INSERT statements - you just change the tool.
I really like your distinction of the three types of data.
I agree for the third.
In our application, we try to avoid putting the first kind in the database, because it is duplicated (as it has to be in the code, the database copy is a duplicate). A secondary benefit is that we need no join or query to get access to that value from the code, so this speeds things up.
If there is additional information that we would like to have in the database, for example if it can be changed per customer site, we separate the two. Other tables can still reference that data (either by index, e.g. 0, 1, 2, 3, or by code, e.g. EMPTY, SIMPLE, DOUBLE, ALL).
For the second, the scripts should be in source control. We separate them from the structure (I think they are typically replaced as time goes on, while the structure keeps accumulating deltas).
How do I (as a developer) warn the people doing the deployment that they should execute an insert statement?
We have a complete procedure for that, and a readme coming with each release, with scripts and so on...
First off, I have never used Visual Studio Database Edition. You are blessed (or cursed) with whatever tools this utility gives you. Hopefully that includes a lot of flexibility.
I don't know that I'd make that big a difference between your type 1 and type 2 static data. Both are sets of data that are defined once and then never updated, barring subsequent releases and updates, right? In which case the main difference is in how or why the data is as it is, and not so much in how it is stored or initialized. (Unless the data is environment-specific, as in "A" for development, "B" for Production. This would be "type 4" data, and I shall cheerfully ignore it in this post, because I've solved it using SQLCMD variables and they give me a headache.)
First, I would make a script to create all the tables in the database--preferably only one script, otherwise you can have a LOT of scripts lying about (and find-and-replace when renaming columns becomes very awkward). Then, I would make a script to populate the static data in these tables. This script could be appended to the end of the table script, or made its own script, or even made one script per table, a good idea if you have hundreds or thousands of rows to load. (Some folks make a csv file and then issue a BULK INSERT on it, but I'd avoid that as it just gives you two files and a complex process [configuring drive mappings on deployment] to manage.)
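For illustration, a table script plus a re-runnable static data script might look like this (table and rows invented for the example):
-- Table creation script
CREATE TABLE dbo.OrderStatus
(
    StatusId   int         NOT NULL PRIMARY KEY,
    StatusName varchar(50) NOT NULL
);
-- Static data script, written so it is safe to run more than once
INSERT INTO dbo.OrderStatus (StatusId, StatusName)
SELECT v.StatusId, v.StatusName
FROM (VALUES (1, 'Open'), (2, 'Shipped'), (3, 'Closed')) AS v(StatusId, StatusName)
WHERE NOT EXISTS (SELECT 1 FROM dbo.OrderStatus s WHERE s.StatusId = v.StatusId);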
The key thing to remember is that data (as stored in databases) can and will change over time. Rarely (if ever!) will you have the luxury of deleting your Production database and replacing it with a fresh, shiny, new one devoid of all that crufty data from the past umpteen years. Databases are all about changes over time, and that's where scripts come into their own. You start with the scripts to create the database, and then over time you add scripts that modify the database as changes come along -- and this applies to your static data (of any type) as well.
(Ultimately, my methodology is analogous to accounting: you have accounts, and as changes come in you adjust the accounts with journal entries. If you find you made a mistake, you never go back and modify your entries, you just make subsequent entries to reverse and fix them. It's only an analogy, but the logic is sound.)
The solution I use is to have create and change scripts in source control, coupled with version information stored in the database.
Then, I have an install wizard that can detect whether it needs to create or update the db - the update process is managed by picking appropriate scripts based on the stored version information in the database.
See this thread's answer. Static data from your first two points should be in source control, IMHO.
Edit:
All-in-one or separate scripts? It does not really matter as long as you (the dev team) agree with your deployment team. I prefer separate files, but I can still always create an all-in-one.sql from those in the proper order [Logins, Roles, Users; Tables; Views; Stored Procedures; UDFs; Static Data; (Audit Tables, Audit Triggers)].
How do you make sure they execute it? Well, make it another step in your application/database deployment documentation. If you roll out an application which really needs specific (new) static data in the database, then you might want to perform a DB version check in your application: you update the DB_VERSION to your new release number as part of that script, and your application on start-up should check it and report an error if a newer DB version is required (see the sketch after this list).
Dev and non-dev static data: I have never actually seen this case. More often there is real static data, which you might call "dev": major configuration, ISO static data, etc. The other type is default lookup data, which is there for users to start with, but they might add more. The mechanism to INSERT this data might be different, because you need to ensure you do not destroy (power-)user-created data.
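A sketch of that DB version check (DB_VERSION here is a hypothetical single-row table; the application compares the value with the release it was built for):
-- Last step of the release's static data script: record the new release number
UPDATE dbo.DB_VERSION SET VersionNumber = '2.5.0';
-- At application start-up: read the deployed version and report an error
-- if it is older than the release the code expects
SELECT VersionNumber FROM dbo.DB_VERSION;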