I'm new to working on an AS/400 and I have a query that joins across 4 tables. The query itself is fine; it runs in STRSQL and displays the results.
What I am struggling with is getting the query to run programmatically (it will eventually be run from a scheduled CL program).
I tried creating a physical file that contains the query and running it with RUNQRY, but it simply displays the query itself, not the actual result set.
Does anyone know what I am doing wrong?
UPDATE
Thanks everyone for the direction and the resources; with them I was able to reach my goal. In case it helps anyone, this is what I ended up doing (all of this was done in its own library, ALLOCATE):
Created a source physical file (using CRTSRCPF): QSQLSRC, and created a member named SQLLEAGSEA, with the type of TXT, that contains the SQL statement.
Created another source physical file: QCLSRC, and created a member named POPLEAGSEA, with the type of CLP, that changes the current library to ALLOCATE then runs the query using RUNSQLSTM (more detail on this below). Here is the actual command:
RUNSQLSTM SRCFILE(QSQLSRC) SRCMBR(SQLLEAGSEA) COMMIT(*NONE) NAMING(*SYS)
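So the whole CLP member is essentially just (give or take error handling):
PGM
CHGCURLIB CURLIB(ALLOCATE)
RUNSQLSTM SRCFILE(QSQLSRC) SRCMBR(SQLLEAGSEA) COMMIT(*NONE) NAMING(*SYS)
ENDPGM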
Added the CLP to the scheduled jobs (using ADDJOBSCDE), running the following command:
CALL PGM(ALLOCATE/POPLEAGSEA)
With regard to RUNSQLSTM, my research indicated that I wasn't going to be able to use it, because it doesn't support SELECT statements. What I didn't indicate in my question was what I needed to do with the result - I was going to be inserting the resultant data into another table (had I mentioned that, I'm sure the answers would have gotten there a lot quicker). So effectively I wasn't doing a SELECT; my end result is actually an INSERT. So my SQL statement (in SQLLEAGSEA) begins with:
INSERT INTO
ALLOCATE/LEAGSEAS
SELECT
...
BLAH BLAH BLAH
...
From my research, I gather that RUNSQLSTM doesn't support SELECT because it has no mechanism to do anything with the results. Once I stopped taking baby steps and realized I needed to SELECT and INSERT in the same statement, my main problem was solved.
Thanks again everyone!
The command is RUNSQLSTM to run a static SQL statement in a physical file member or stream file.
It is a non-interactive command, so it will not execute SQL statements that attempt to return a result set.
If you want more control, including the ability to run interactive statements, see the Qshell db2 utility.
For example:
QSH CMD('db2 -f /QSYS.LIB/MYLIB.LIB/MYSRCFILE.FILE/MYSQL.MBR')
Note that the db2 utility only accepts the *SQL naming convention.
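That means statements inside the member must qualify objects with a dot rather than a slash - for example (hypothetical library and table names):
INSERT INTO MYLIB.RESULTS
    SELECT * FROM MYLIB.SOURCE;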
QM Query
If all the SQL you need is the single complex SQL statement - and this is what it sounds like - then your best bet is to use Query Management Query (see the QM Query manual).
The results can be directed to a display, a spool file, or a physical file (i.e. a DB2 table). The default output when run interactively is to the screen, but when run in a (scheduled) batch job it will default to a spooled file report.
You can create the QM Query interactively via WRKQMQRY, in prompted mode (much like Query/400) or in SQL mode. Or you can compile the QM Query from source, with the CRTQMQRY command.
To run your QM Query, use the STRQMQRY command.
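A minimal sketch of the compile-and-run sequence, sending the results to an output file (all names are placeholders):
CRTQMQRY QMQRY(MYLIB/MYQRY) SRCFILE(MYLIB/QQMQRYSRC) SRCMBR(MYQRY)
STRQMQRY QMQRY(MYLIB/MYQRY) OUTPUT(*OUTFILE) OUTFILE(MYLIB/RESULTS)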
RUNSQL cmd
If you are using a system that has IBM i 7.1 fully up to date, with Technology Refresh 4 (TR4) installed, then you could also use the new RUNSQL command to execute a single statement (see the discussion on developerWorks).
SQL Scripting w/ RUNSQLSTM cmd
From CL you can run SQL scripts of multiple SQL statements from a source file member. There is no standard default source file name for this, but QSQLSRC is commonly used. The source member can contain multiple non-interactive SQL statements. This means you cannot use a SELECT statement (directly), since there would be nowhere to send the results. CL commands are even allowed if given a CL: prefix. Both SQL and CL statements should be terminated with a semicolon (;). While the SQL statements cannot display data directly to the screen, the same restriction does not apply to the scripted CL commands.
The STRQMQRY command can be embedded in a RUNSQLSTM script by placing the prefix "CL: " in front of the command. Since STRQMQRY can direct output to the screen, a report, or an output table, this can be very useful.
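For example, a script member might look like this (hypothetical names throughout; note the terminating semicolons and the CL: prefix):
DELETE FROM MYLIB/RESULTS;
INSERT INTO MYLIB/RESULTS
    SELECT * FROM MYLIB/SOURCE
    WHERE STATUS = 'A';
CL: STRQMQRY QMQRY(MYLIB/MYQRY) OUTPUT(*PRINT);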
Remember that to direct your output from a SELECT query to a file you can use either the INSERT or CREATE TABLE statements.
CREATE TABLE newtbl AS
( full-select )
WITH DATA;
Or, to put the results into a table you create in your job's QTEMP library:
DECLARE GLOBAL TEMPORARY TABLE newtbl AS
( full-select )
WITH DATA;
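The declared table lands in your job's QTEMP, so later statements in the same job can read it back - for example (an INSERT, since a bare SELECT is still not allowed in RUNSQLSTM), assuming the name newtbl from above:
INSERT INTO MYLIB/ARCHIVE SELECT * FROM QTEMP/newtbl;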
[Note: If you create the source to be used by CRTQMQRY, you are advised to create it as CRTSRCPF yourlib/QQMQRYSRC RCDLEN(91), since the compiler will only use 79 columns of your source data (adding 12 for the sequence number and change date gives 91). However, for QM Forms, which can be used to provide additional formatting, the CRTQMFORM compiler will use 81 columns, so RCDLEN(93) is advised for QQMFORMSRC.]
RUNQRY is a utility that lets you execute a query that was created by another utility named WRKQRY. If you really want to process SQL statements held in a file, try RUNSQLSTM. It uses a source physical file to store the statements, not a database file. The standard name for that source physical file is QQMQRYSRC. To create that file, use CRTSRCPF yourlib/QQMQRYSRC. Then you can use PDM to work with that source PF: WRKMBRPDM yourlib/QQMQRYSRC. Use F6 to create a new source member; make it source type TXT. Then use option 2 to start the SEU editor. Copy/paste your SQL statements into the editor and press F3 to save the source. Once the source is saved, use RUNSQLSTM to execute it.
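Put together, the sequence looks roughly like this (library and member names are placeholders):
CRTSRCPF FILE(YOURLIB/QQMQRYSRC)
WRKMBRPDM FILE(YOURLIB/QQMQRYSRC)  /* F6=new member, type TXT; 2=edit */
RUNSQLSTM SRCFILE(YOURLIB/QQMQRYSRC) SRCMBR(MYSQL) COMMIT(*NONE)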
It is (now) possible to run SQL directly in a CL program without using QM Query, RUNSQLSTM or QShell.
Here is an article that discusses the RUNSQL command in CL programs:
http://www.mcpressonline.com/cl/the-cl-corner-introducing-the-new-run-sql-command.html
The article contains information on which OS levels are supported, as well as clear examples of several ways to use the RUNSQL command.
This will work in two steps:
RUNSQL SQL('CREATE TABLE QTEMP/REPORT AS (SELECT +
EXTRACT_DATE , SYSTEM, ODLBNM, SUM( +
OBJSIZE_MB ) AS LIB_SIZE FROM +
ZSYSCOM/DISKRPTHST WHERE ODLBNM LIKE +
''SIS%'' GROUP BY EXTRACT_DATE, SYSTEM, +
ODLBNM ORDER BY LIB_SIZE DESC) WITH +
DATA') COMMIT(*NONE) DATFMT(*USA) DATSEP(/)
RUNQRY QRYFILE((QTEMP/REPORT)) OUTTYPE(*PRINTER) +
OUTFORM(*DETAIL) PRTDFN(*NO) PRTDEV(*PRINT)
The first step creates a temporary table result in QTEMP, and the second step runs an ad hoc query over just that temporary table to a spool file.
Thanks,
Michael Frilot
There is of course a totally different solution: you could write and compile a program containing the statement. It requires some deeper reading, especially if you are new to the platform, but it should give you the most flexibility over what you do with the results. You can use SQL in C, C++, RPG, RPG/LE, REXX, PL/I (which I'm not familiar with) and COBOL. Doing that, you can react in any processable way to the results of one query and start/create other queries based on what you get.
Although some old-fashioned RPG programmers try everything to deny that SQL in RPG exists, it is possible today, in many cases, to write RPG programs with SQL only and no direct file access (without F-specs, for those who know RPG).
If your solution works for you, perfect. If you need to do something else, take a look at this PDF: http://publib.boulder.ibm.com/infocenter/iseries/v5r3/topic/rzajp/rzajp.pdf
The integration into RPG is not too bad. It works with the normal program flow. It would look something like this (in free form):
/free
// init search values:
searchval = 'Someguy';
// do the SQL query:
exec sql
SELECT column1, column2
INTO :var1, :var2
FROM somelib/somefile
WHERE keycol=:searchval;
// now do something with the values:
some_proc(var1);
/end-free
In this, var1, var2, and searchval are ordinary RPG variables; no quoting is needed. This also works with data structures (externally defined ones, e.g. the record format of the file itself, fit well). You can of course work with cursors and loops, too. I feel that RPG programs tend to be easier to read with this.
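If the query can return more than one row, you would use a cursor instead of SELECT ... INTO. A sketch in the same style (names are hypothetical; sqlcod is the status field the SQL precompiler supplies via the SQLCA):
/free
  exec sql
    DECLARE c1 CURSOR FOR
      SELECT column1, column2
      FROM somelib/somefile
      WHERE keycol = :searchval;
  exec sql OPEN c1;
  exec sql FETCH c1 INTO :var1, :var2;
  dow sqlcod = 0;             // 0 = a row was fetched OK
    some_proc(var1);
    exec sql FETCH c1 INTO :var1, :var2;
  enddo;
  exec sql CLOSE c1;
/end-free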
Related
I have a bat file that calls SQL Developer and spools out a query to a text file; however, the result is all in one row - it seems it doesn't know how to identify a row.
For example, if I run the spool script in sql developer manually, the txt file looks perfect like this:
"Item","Qty","Price"
"A11","4","0.86"
"A12","3","0.56"
"A14","5","0.3"
But if I ran it with the bat file, it came out like this:
"Item","Qty","Price""A11","4","0.86""A12","3","0.56""A14","5","0.3"
Without the right format, when I import it into an Excel file, all the data ends up in one cell.
I have tried all kinds of format settings like SET PAGESIZE, SET TERMOUT... but none of these work. On another device I ran exactly the same code, and I do not have this problem.
bat file code:
@echo off
C:
cd C:\sqldeveloper\sqldeveloper\bin
sdcli migration -actions=mkconn,runsql -connDetails=target_oracle:oracle:XXXXX -conn=target_oracle -sql="C:\Desktop\1.sql"
1.sql:
spool "C:\Desktop\test.txt"
@C:\2.sql as script(F5);
spool off
2.sql:
Select /*csv*/ * From (
select * from item
);
I got stuck here for a while; if you have a solution, please let me know. Thank you.
The answer is to use the right SQL Developer command line interface for the job.
We have two:
SDCLI - this is a headless version of the full SQL Developer program - a fancy way of saying, no GUI. It can do things like perform a database export or invoke a Cart feature. Using it to run a SQL statement via the Migration task is like using a flamethrower to defrost your car's windshield - although probably not nearly as fun.
SQLcl - this is a command-line, interactive interface to the Oracle database. It's a Java-based version of SQL*Plus. It has the same code as SQL Developer when it comes to making connections, running scripts, etc. - but it's only 20MB vs 200+MB, and it only needs a JRE vs a JDK.
Both of these programs are in your sqldeveloper\bin folder - but SQLcl is also a separate, standalone, supported product.
So, to do what you want, you need to change this:
sdcli migration -actions=mkconn,runsql -connDetails=target_oracle:oracle:XXXXX -conn=target_oracle -sql="C:\Desktop\1.sql"
to this:
sql user/pwd@server:port/service @c:\users\jdsmith\desktop\1.sql
And your 1.sql can be this:
spool c:\users\jdsmith\desktop\locations-so.csv
Select /*csv*/ * From locations;
spool off
exit
Which gives us the query results spooled, properly row by row, into locations-so.csv.
Bonus: SQLcl will run through this MUCH faster.
I'm getting a little confused about using parameters with SQL queries, and seeing some things that I can't immediately explain, so I'm just after some background info at this point.
First, is there a standard format for parameter names in queries, or is this database/middleware dependent? I've seen both this:
DELETE * FROM #tablename
and...
DELETE * FROM :tablename
Second - where (typically) does the parameter replacement happen? Are parameters replaced/expanded before the query is sent to the database, or does the database receive the params and query separately and perform the expansion itself?
Just as background, I'm using the DevArt UniDAC toolkit from a C++Builder app to connect via ODBC to an Excel spreadsheet. I know this is almost pessimal in a few ways... (I'm trying to understand why a particular command works only when it doesn't use parameters.)
With data access libraries such as UniDAC or FireDAC, you can use macros. They allow you to use special markers (called macros) in the places of a SQL command where parameters are disallowed. I don't know the UniDAC API, but here is a sample for FireDAC:
ADQuery1.SQL.Text := 'DELETE * FROM &tablename';
ADQuery1.MacroByName('tablename').AsRaw := 'MyTab';
ADQuery1.ExecSQL;
Second - where (typically) does the parameter replacement happen?
It doesn't. That's the whole point. Data elements in your query stay data items. Code elements stay code elements. The two never intersect, and thus there is never an opportunity for malicious data to be treated as code.
connect via ODBC to an Excel spreadsheet... I'm trying to understand why a particular command works only when it doesn't use parameters
Excel isn't really a database engine, but even if it were, you still couldn't use a parameter for the name of a table.
SQL parameters are sent to the database. The database performs the expansion itself. That allows the database to set up a query plan that will work for different values of the parameters.
Microsoft always uses @parname for parameters. Oracle uses :parname. Other databases are different.
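For example, with ADO.NET against SQL Server the parameter value is never spliced into the SQL text client-side; it is attached to the command and sent alongside it (a hypothetical sketch, assuming an open SqlConnection named connection):
using System.Data.SqlClient;

var command = new SqlCommand(
    "DELETE FROM Orders WHERE CustomerId = @custId", connection);
command.Parameters.AddWithValue("@custId", 42);  // value travels separately from the SQL text
command.ExecuteNonQuery();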
No database I know of allows you to specify the table name as a parameter. You have to expand that client side, like:
command.CommandText = string.Format("DELETE FROM {0}", tableName);
P.S. A * is not allowed after a DELETE. After all, you can only delete whole rows, not a set of columns.
I have the following problem: I have a SQL file to execute with the Perl DBI CPAN module.
I saw two solutions on this website to solve my problem:
Read the SQL file line by line
Read the SQL file in one instruction
So, which one is better, and what is the real difference between the two solutions?
EDIT
It's for a library. I need to retrieve the output and the return code.
The kind of file passed might be like the following:
set serveroutput on;
set pagesize 20000;
spool "&1."
DECLARE
-- Retrieve the arguments
-- &2: FLX_REF, &3: SVR_ID, &4: ACQ_STT, &5: ACQ_LOG, &6: FLX_COD_DOC, &7: ACQ_NEL, &8: ACQ_TYP
VAR_FLX_REF VARCHAR2(100):=&2;
VAR_SVR_ID NUMBER(10):=&3;
VAR_ACQ_STT NUMBER(4):=&4;
VAR_ACQ_LOG VARCHAR2(255):=&5;
VAR_FLX_COD_DOC VARCHAR2(30):=&6;
VAR_ACQ_NEL NUMBER(10):=&7;
VAR_ACQ_TYP NUMBER:=&8;
BEGIN
INSERT INTO ACQUISITION_CFT
(ACQ_ID, FLX_REF, SVR_ID, ACQ_DATE, ACQ_STT, ACQ_LOG, FLX_COD_DOC, ACQ_NEL, ACQ_TYP)
VALUES
(TRACKING.SEQ_ACQUISITION_CFT.NEXTVAL, VAR_FLX_REF,
VAR_SVR_ID, sysdate, VAR_ACQ_STT, VAR_ACQ_LOG,
VAR_FLX_COD_DOC, VAR_ACQ_NEL, VAR_ACQ_TYP);
END;
/
exit;
I have another question to ask, again with the DBI Oracle module.
May I use the same code for a SQL file and for a control file?
(Example of a SQL*Loader control file:)
LOAD DATA
APPEND INTO TABLE DOSSIER
FIELDS TERMINATED BY ';'
(
DSR_IDT,
DSR_CNL,
DSR_PRQ,
DSR_CEN,
DSR_FEN,
DSR_AN1,
DSR_AN2,
DSR_AN3,
DSR_AN4,
DSR_AN5,
DSR_AN6,
DSR_PI1,
DSR_PI2,
DSR_PI3,
DSR_PI4,
DSR_NP1,
DSR_NP2,
DSR_NP3,
DSR_NP4,
DSR_NFL,
DSR_NPG,
DSR_LTP,
DSR_FLF,
DSR_CLR,
DSR_MIM,
DSR_TIM,
DSR_NDC,
DSR_EMS NULLIF DSR_EMS=BLANKS "sysdate",
JOB_IDT,
DSR_STT,
DSR_DAQ "CASE WHEN :DSR_DAQ IS NOT NULL THEN SYSDATE ELSE NULL END"
)
Reading the file one line at a time is more complex, but it can use less memory - provided you structure your code to make use of the data per item and not need it all later.
Often you want to process each item separately (e.g. to do work on the data), in which case you might as well use the read line-by-line approach to define your loop.
I tend to use single-instruction approach by default, but as soon as I am concerned about number of records (especially in long-running batch processes), or need to loop through the data as the first task, then I read records one-by-one.
In fact, the two answers you reference propose the same solution: read and execute line by line (but the first is clearer on the point). The second question also has an alternative answer for the case where the file contains a single statement.
If you don't execute the SQL line-by-line, it's very difficult to trap any errors.
"Line by line" only makes sense if each SQL statement is on a single line. You probably mean statement by statement.
Beyond that, it depends on what your SQL file looks like and what you want to do.
How complex is your SQL file? Could it contain things like this?
select foo from table where column1 = 'bar;'; --Get foo; it will be used later.
The simple way to read an SQL file statement by statement is to split on semicolons (or whatever the statement delimiter is). But this method will fail if semicolons can occur in other places, like comments or strings. If you split this statement on semicolons, you would try to execute the following four "commands":
select foo from table where column1 = 'bar;
';
--Get foo;
it will be used later.
Obviously, none of these are valid. Handling statements like this correctly is no simple matter; you have to completely parse the SQL to figure out where the statements begin and end. Unfortunately, there is no ready-made module that can do this for you (SQL::Script is a good start on an SQL file processing module, but according to its documentation it just splits on semicolons at this point).
If your SQL file is simple - not containing any statement delimiters within statements or comments - or if it is predictable in some other way (such as having one statement per line), then it is easy to split the file into statements and execute them one by one. But if you have to handle arbitrary SQL syntax, including cases like the above, this will be a complex task.
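For the simple case, a naive splitter is short (a Perl sketch under that assumption; the connection details are hypothetical, and as noted above this approach breaks on semicolons inside strings, comments, or BEGIN...END blocks):
use strict;
use warnings;
use DBI;

# Hypothetical connection details - adjust for your environment.
my $dbh = DBI->connect('dbi:Oracle:MYDB', 'user', 'password',
                       { RaiseError => 0, PrintError => 1, AutoCommit => 0 });

open my $fh, '<', 'script.sql' or die "Cannot open script.sql: $!";
my $sql = do { local $/; <$fh> };   # slurp the whole file
close $fh;

# Naive split on semicolons - only safe for simple files.
for my $stmt (split /;/, $sql) {
    next unless $stmt =~ /\S/;      # skip empty fragments
    my $rc = $dbh->do($stmt);
    warn "Statement failed: " . $dbh->errstr unless defined $rc;
}
$dbh->commit;
$dbh->disconnect;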
What kind of task?
Do you need to retrieve the output?
Is it important to detect errors in any individual statement, or is it just a batch job that you can run and not worry about it?
If this is something that you can just run and forget about, you could just have Perl execute a system command, telling Oracle to process the file. This will be simpler than handling all of the statements yourself. But if you need to process the results or handle errors within Perl, doing it yourself statement by statement will be a necessity.
Update: based on your response, you want to write a library that can handle arbitrary SQL statements. In that case, you definitely need to parse the SQL and execute the statements one at a time. This is doable, but not simple. The possibility of BEGIN...END blocks means that you have to be able to correctly handle semicolons within a statement.
The SQL::Statement class of modules may be helpful.
Say I have written a CREATE TABLE script in a query window and run it, so the table got created. Now, where is this script stored (in which system table)? I mean, if I do a
select * from sys.syscomments
I will get the script for a stored procedure or function in the "Text" column. Likewise, is there any way of getting the same for a table or view?
Any DMV, etc.?
Thanks in advance
I'm not sure where the script is stored, but if you're looking to view the scripted language to create the table, in SQL 2008 R2 (and I'm pretty sure SQL 2008) SSMS can generate the script on the fly. Just select your table (or view, SP, etc...) and right-click. From the context menu, choose Script Table as > CREATE To (or whichever other option you want from the list). Then you have a choice of output locations - the New Query Editor Window is probably the easiest way to see the results. From there you have the base language for that table's creation, and you can modify or save it.
The query below also works for views:
select * from sys.syscomments
To get the script for tables, you have to:
Right-click on the database >> Tasks >> Generate Scripts >> choose objects (select tables)
I think that is the only way of getting table scripts.
For views you can use the column VIEW_DEFINITION in:
SELECT *
FROM INFORMATION_SCHEMA.VIEWS
I don't think there is an equivalent for tables, because tables don't require a stored definition - the table itself is the definition. If you need to copy table structures, then the best method is probably to use:
SELECT *
INTO NewTable
FROM OldTable
WHERE 0 = 1
You could try generating your own scripts using the system views and dynamic SQL; something like this would get you started.
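Here is a minimal sketch of that idea (table names are hypothetical; it ignores nullability, decimal precision, MAX types, and the fact that nvarchar lengths are stored in bytes):
DECLARE @sql nvarchar(max) = 'CREATE TABLE dbo.NewTable (';

-- Build a column list from the catalog views.
SELECT @sql = @sql + c.name + ' ' + t.name
     + CASE WHEN t.name IN ('char','nchar','varchar','nvarchar')
            THEN '(' + CAST(c.max_length AS varchar(10)) + ')'
            ELSE '' END
     + ', '
FROM sys.columns c
JOIN sys.types t ON t.user_type_id = c.user_type_id
WHERE c.object_id = OBJECT_ID('dbo.OldTable')
ORDER BY c.column_id;

SET @sql = LEFT(@sql, LEN(@sql) - 1) + ')';   -- trim the trailing comma
EXEC sp_executesql @sql;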
This does not create constraints, and there may be things I've missed or that will cause it to break, but the general gist is there. You could also do this outside of SQL, as demonstrated quite nicely in this answer.
HOWEVER, I should add that I do not condone the use of these methods. There are few scenarios I can think of where it would be necessary to programmatically copy a table's structure in this fashion. If you do need to copy structures as a one-off, you'd be better off using the script generation functions built into SSMS.
I would like to run some ad hoc select statements in the IBM System I Navigator tool for DB2 using a variable that I declare.
For example, in the SQL Server world I would easily do this in the SQL Server Management Studio query window like so:
DECLARE @VariableName varchar(50);
SET @VariableName = 'blah blah';
select * from TableName where Column = @VariableName;
How can I do something similar in the IBM System I Navigator tool?
I ran across this post while searching for the same question. My coworker provided the answer. It is indeed possible to declare variables in an ad hoc SQL statement in Navigator. This is how it is done:
CREATE OR REPLACE VARIABLE variableName VARCHAR(50);
SET variableName = 'blah';
SELECT * FROM table WHERE column = variableName;
DROP VARIABLE variableName;
If you don't drop the variable, it will hang around until who knows when...
At the moment, we're working on the same issue at work. Unfortunately, we concluded that this is not possible. I agree, it would be great, but it just doesn't work that way: iNavigator doesn't support SET or DEFINE. You can do that in embedded SQL, but this is not embedded SQL. Even if you create a separate document (xxx.sql), you then need to open that document to run the script, which makes it an interactive script (that is, a DECLARE SECTION is not allowed).
As an alternative, in the SQL screen/script you can use the CL: prefix. Anything after this prefix is executed as a CL command. You can manipulate your tables (e.g. RNMF) this way.
As a second alternative, the iSeries does support Rexx scripts (installed with the OS by default). Rexx is a good dynamic scripting language and it does support embedded SQL. I've done that lots of times and it works great. I've even created scripts for our production environment.
Just create one 'default' script with an example PREPARE and CURSOR statement and copy it at will. With that script you can play around. See the Rexx manual for the correct syntax of EXECSQL. Also, you do have STDIN and STDOUT, but you can use OVRDBF to point them to a database table (physical file). Just let me know if you need an example Rexx script.
Note that the manual "Embedded SQL programming" does have Rexx examples.
Here are a couple of other alternatives.
Data Transfer Tool - You can run the iSeries Data Transfer Tool from the command line (RTOPCB). First, run the GUI version and create a definition file. If you edit this file with a text editor, you will see that it is just an old-fashioned INI file, and you can easily find the line with the query in it. From there, you could write a batch file or otherwise pre-process the text file to allow you to manipulate the query before submitting it to the query tool.
QSHELL - If you can log on to the iSeries interactively, then you may find the QSHELL environment more familiar than CL or REXX (although REXX is kind of fun). QSHELL is a full POSIX environment running on the iSeries. Use the command STRQSH to start QSHELL. You can have ksh or csh as your shell. Inside QSHELL, there is a command called db2 that submits queries. So, you should be able to do something like this inside QSHELL:
system> VariableName='blah blah'
system> db2 "select * from TableName where Column = '$VariableName'"
You may have to fiddle with the quotes to get ksh to pass them correctly.
Also, inside QSHELL, you should have a full Perl installation that will allow you to use DBI to get data.
Some other ways to interact with data on the iSeries: query from the client with Python via ODBC; query from the client with Jython via JDBC; or install Jython directly on the iSeries and query via JDBC.