Choose a different division of Exact Online when using a distributed query with Invantive SQL

I have a set of SQL statements using the distributed option of Invantive SQL that extract shipped-goods information from Exact Online and create a ticket in Freshdesk for each serial number shipped, together with the consumer as a contact.
This works fine when connected to Exact Online and Freshdesk under one logon code. However, the end user uses a different logon code, and in that case the set of SQL statements retrieves data from their test division in Exact Online instead of the correct production division.
When not using the distributed option, I can change the division using:
use 123123
Where 123123 is the unique division number in the Exact Online country.
When connected to both Exact Online and Freshdesk, I get:
itgenuse002: List of partitions could not be determined.
How can I enforce that the set of SQL statements is executed against a specific Exact Online division instead of the one currently set as default for the logon code?
Sample SQL query that shows the problem:
create or replace table fulladdress#inmemorystorage -- STEP 1.
as
select acad.id
, acad.name
, acad.phone
, acad.email
, acad.addressline1 || ' ' || acad.postcode || ' ' || acad.city fulladdress
from ExactonlineREST..Accounts#eolnl acad
where acad.status = 'C'

The use statement shown works for databases with exactly one data container. In that case, there is only one data container that can handle the statement and everything runs smoothly.
With a distributed query in Invantive SQL, you need to tell the use statement which data container it applies to. Otherwise, the first data container will try to handle it (in this case probably Freshdesk, which has no concept of partitioning). That is similar to appending the data container alias to each table, as in:
select ...
from table#eolnl
join table2#freshdesk
on ...
Here eolnl and freshdesk specify in which data container the tables should be looked up.
So, in this case use:
use 123123#eolnl
The same also holds for the set statement.

From your code it seems you have multiple data containers running in your connection. From the user interface you can only set the partitions on the default data container.
However, there is an easy code solution. You have to know the alias of the data container you want to set the partition for, and then use that alias in a use call (in this sample, 123123 is the partition you want to choose):
use 123123#eolnl
Or to use all partitions available:
use all#eolnl
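Putting this together with the sample from the question, the script would first select the division on the Exact Online data container and then run the distributed statements (123123 stands for the actual production division number):
use 123123#eolnl

create or replace table fulladdress#inmemorystorage -- STEP 1.
as
select acad.id
, acad.name
, acad.phone
, acad.email
, acad.addressline1 || ' ' || acad.postcode || ' ' || acad.city fulladdress
from ExactonlineREST..Accounts#eolnl acad
where acad.status = 'C'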

Related

How can I avoid hard coding in a SQL query?

I have a query like SELECT '16453842' AS ACCOUNT, and I want to change this account number to a dynamic one. Can anyone tell me some possible ways to do it?
The scenario is that I would also like to accommodate other values for ACCOUNT, so that multiple values can be used.
The Node.js code uses that SQL, referenced by carrierData.sql, in the code below:
writeDebug({'shipment': shipment, 'carrier': carrier, 'message': 'reading sql file: ' + carrierData.sql});
fs.readFile(carrierData.sql, 'utf8', function (err, sql) {
carrierData comes from a JSON file, in which sql contains the path and name of the SQL file that is going to be used. Finally, the SQL file runs a query like the one below:
SELECT 'T' AS RECORD
, '16453842' AS ACCOUNT
and here lies my problem: we have some additional ACCOUNT numbers as well, which we would like to accommodate.
The service starts with node server.js, which calls the workers.js file containing the code I pasted above.
So please let me know the possible ways to do this.
Here are things to research:
Using bind variables. These are used for security and for performance when a statement is executed multiple times (see the sketch after this list).
Using multiple values in IN clauses.
Using executeMany() if you are loading data into the database.
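As a minimal sketch of the bind-variable approach, assuming an Oracle database and node-oracledb (the library that provides executeMany()); the connection details are placeholders:
const oracledb = require('oracledb');

async function getRecord(account) {
  const connection = await oracledb.getConnection({
    user: 'me', password: 'secret', connectString: 'host/service'
  });
  try {
    // :account is a bind variable: the statement text stays constant,
    // so one SQL file can serve any account number.
    const result = await connection.execute(
      "SELECT 'T' AS RECORD, :account AS ACCOUNT FROM DUAL",
      { account: account }
    );
    return result.rows;
  } finally {
    await connection.close();
  }
}
For a list of account numbers, generate one bind placeholder per value in an IN clause; for bulk loading, executeMany() accepts an array of bind objects.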

How can I schedule a script in BigQuery?

At last BigQuery supports using ; in queries, so I can write more than one query in one "block" if I separate them with semicolons.
If I run the code manually, it works, but I cannot schedule it.
When I want to schedule, I have two choices:
(New) Web UI: I must give a destination table; if I don't, I cannot save the scheduled query. But all my queries are updates and inserts with different "destination tables", like these:
UPDATE project.exampledataset.a
SET date = current_date()
WHERE TRUE
;
INSERT INTO project.otherdataset.b
SELECT c,d
FROM project.otherdataset.c
So I cannot even make a scheduling in the Web UI.
Classic UI: I tried this because the official documentation states that I should leave the "destination table" blank, and the Classic UI allows it. I can set up the schedule, but it doesn't run when it should. I get this error message by email: "Error status: Dataset specified in the query ('') is not consistent with Destination dataset 'exampledataset'."
AFAIK, scripting (and using semicolons) is a very new feature in BigQuery, but I hope someone can help me.
Yes, I know that I could schedule every query one by one, but I would like to solve this with one big script.
It looks like the scheduled query was defined earlier with a destination dataset and an APPEND/TRUNCATE write disposition. When updating the same scheduled query to a DML query, the GUI doesn't offer the dataset field / table name so that it can be cleared to NULL. Hence this error, caused by the dataset and table name previously set in the scheduled query.
The fix is to delete the scheduled query and create it from scratch with the DML query option. It worked for me.
Scripting is now supported in scheduled queries. However, a scripting query, when scheduled, does not currently support setting a destination table. You still need to use DDL/DML to make changes to an existing table.
E.g.:
CREATE OR REPLACE TABLE destinationTable AS
SELECT *
FROM sourceTable
WHERE date >= maxDate
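Applied to the statements from the question, the whole scheduled script would then be pure DML, with no destination table configured (dataset and table names as in the question):
UPDATE project.exampledataset.a
SET date = current_date()
WHERE TRUE
;
INSERT INTO project.otherdataset.b
SELECT c, d
FROM project.otherdataset.c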
As of 2022, the BQ Console UI will let you create a new scheduled query without a destination dataset, but it won't let you update a prior SELECT to use DDL/DML block syntax. However, you can use the BigQuery Data Transfer API to update the destinationDatasetId field, via transferconfigs/patch. Use transferconfigs/list to get the configId for a given scheduled query.
Note that you can either use the in-browser API Explorer, if you have the appropriate credentials, or write a programmatic solution. Also seems useful for setting/updating any other fields, including renaming scheduled queries.

How does Tableau run queries on Redshift? (And/or why can't Redshift display Tableau queries?)

I'm kicking the tires on BI tools, including, of course, Tableau. Part of my evaluation includes correlating the SQL generated by the BI tool with my actions in the tool.
Tableau has me mystified. My database has 2 billion things; however, no matter what I do in Tableau, the query Redshift reports as having been run is "Fetch 10000 in SQL_CURxyz", i.e. a cursor operation. Watching the running queries, you can see the cursor ids change, indicating new queries are being run -- but you never see the original queries.
Is this a Redshift or Tableau quirk? Any idea how to see what's actually running under the hood? And why is Tableau always operating on 10000 records at a time?
I just ran into the same problem and wrote this simple query to get all queries for currently active cursors:
SELECT
usr.usename AS username
, min(cur.starttime) AS start_time
, DATEDIFF(second, min(cur.starttime), getdate()) AS run_time
, min(cur.row_count) AS row_count
, min(cur.fetched_rows) AS fetched_rows
, listagg(util_text.text)
WITHIN GROUP (ORDER BY sequence) AS query
FROM STV_ACTIVE_CURSORS cur
JOIN stl_utilitytext util_text
ON cur.pid = util_text.pid AND cur.xid = util_text.xid
JOIN pg_user usr
ON usr.usesysid = cur.userid
GROUP BY usr.usename, util_text.xid;
Ah, this has already been asked on the AWS forums.
https://forums.aws.amazon.com/thread.jspa?threadID=152473
Redshift's console apparently doesn't display the query behind cursors. To get that, you can query STV_ACTIVE_CURSORS: http://docs.aws.amazon.com/redshift/latest/dg/r_STV_ACTIVE_CURSORS.html
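A quick look at that system table, using columns from the linked documentation, shows the active cursors and how far they have been fetched:
SELECT pid, xid, name, starttime, row_count, fetched_rows
FROM stv_active_cursors;
Joining it to stl_utilitytext, as in the query earlier in this thread, recovers the SQL text behind each cursor.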
Also, you can alter your .TWB file (which is really just an XML file) and add the following parameters to the odbc-connect-string-extras property.
UseDeclareFetch=0;
FETCH=0;
You would end up with something like:
<connection class='redshift' dbname='yourdb' odbc-connect-string-extras='UseDeclareFetch=0;FETCH=0' port='0000' schema='schm' server='any.redshift.amazonaws.com' [...] >
Unfortunately there's no way of changing this behavior through the application; you must edit the file directly.
You should be aware of the performance implications of doing so. While this greatly enhances debugging, there must be a reason why Tableau chose not to allow modification of these parameters through the application.

How do I programmatically run a complex query on an as400?

I'm new at working on an AS/400, and I have a query that joins across 4 tables. The query itself is fine; it runs in STRSQL and displays the results.
What I am struggling with is getting the query to run programmatically (it will eventually be run from a scheduled CL script).
I tried creating a physical file that contains the query and running it with RUNQRY, but that simply displays the query itself, not the actual result set.
Does anyone know what I am doing wrong?
UPDATE
Thanks everyone for the direction and the resources; with them I was able to reach my goal. In case it helps anyone, this is what I ended up doing (all of this was done in its own library, ALLOCATE):
Created a source physical file (using CRTSRCPF): QSQLSRC, and created a member named SQLLEAGSEA, with the type of TXT, that contains the SQL statement.
Created another source physical file: QCLSRC, and created a member named POPLEAGSEA, with the type of CLP, that changes the current library to ALLOCATE then runs the query using RUNSQLSTM (more detail on this below). Here is the actual command:
RUNSQLSTM SRCFILE(QSQLSRC) SRCMBR(SQLLEAGSEA) COMMIT(*NONE) NAMING(*SYS)
Added the CLP to the scheduled jobs (using ADDJOBSCDE), running the following command:
CALL PGM(ALLOCATE/POPLEAGSEA)
With regard to RUNSQLSTM, my research indicated that I wasn't going to be able to use this command, because it didn't support SELECT statements. What I didn't indicate in my question was what I needed to do with the result - I was going to be inserting the resultant data into another table (had I said that, I'm sure the answers could have figured this out a lot quicker). So effectively I wasn't going to be doing a SELECT; my end result is actually an INSERT. So my SQL statement (in SQLLEAGSEA) begins with:
INSERT INTO
ALLOCATE/LEAGSEAS
SELECT
...
BLAH BLAH BLAH
...
From my research, I gather that RUNSQLSTM doesn't support SELECT because it doesn't have a mechanism to do anything with the results. Once I stopped taking baby steps and realized I needed to SELECT AND INSERT in the same statement, it solved my main problem.
Thanks again everyone!
The command is RUNSQLSTM to run a static SQL statement in a physical file member or stream file.
It is a non-interactive command, so it will not execute SQL statements that attempt to return a result set.
If you want more control, including the ability to run interactive statements, see the Qshell db2 utility.
For example:
QSH CMD('db2 -f /QSYS.LIB/MYLIB.LIB/MYSRCFILE.FILE/MYSQL.MBR')
Note that the db2 utility only accepts the *SQL naming convention.
QM Query
If all the SQL you need is the single complex SQL statement, and this is what it sounds like, then your best bet is to use a Query Management Query (see the QM Query manual).
The results can be directed to a display, a spool file, or a physical file (i.e. a DB2 table). The default output when run interactively is to the screen, but when run in a (scheduled) batch job it will default to a spool file report.
You can create the QM Query interactively via WRKQMQRY, in prompted mode (much like Query/400) or in SQL mode. Or you can compile the QM Query from source, with the CRTQMQRY command.
To run your QM Query, use the STRQMQRY command.
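For example, a run that directs the output to a database file might look like this (library, query, and file names follow the question and are placeholders):
STRQMQRY QMQRY(ALLOCATE/SQLLEAGSEA) OUTPUT(*OUTFILE) OUTFILE(ALLOCATE/LEAGSEAS)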
RUNSQL cmd
If you are using a system that has IBM i 7.1 fully up-to-date, and has Technology Refresh 4 (TR4) installed, then you could also use the new RUNSQL command to execute a single statement. (see discussion in developerWorks)
SQL Scripting w/ RUNSQLSTM cmd
From CL you can run SQL scripts of multiple SQL statements from a source file member. There is no standard default source file name for this, but QSQLSRC is commonly used. The source member can contain multiple non-interactive SQL statements. This means you cannot use a SELECT statement (directly) since theoretically it will not know where to send the results. CL commands are even allowed if given a CL: prefix. Both SQL and CL statements should be terminated with a semicolon ;. While the SQL statements cannot display data directly to the screen, the same restriction does not apply to the scripted CL commands.
The STRQMQRY command can be embedded in the RUNSQLSTM script, by placing the prefix "CL: " in front of the command. Since STRQMQRY can direct output to the screen, a report, or an output table, this can come in very useful.
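A minimal RUNSQLSTM source member combining both, with hypothetical library, table, and query names, could look like this:
INSERT INTO MYLIB/WORKTBL
SELECT * FROM MYLIB/SRCTBL;

CL: STRQMQRY QMQRY(MYLIB/MYQMQRY);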
Remember that to direct your output from a SELECT query to a file you can use either the INSERT or CREATE TABLE statements.
CREATE TABLE newtbl AS
( full-select )
WITH DATA;
Or, to put the results into a table you create in your job's QTEMP library:
DECLARE GLOBAL TEMPORARY TABLE newtbl AS
( full-select )
WITH DATA;
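For completeness, the INSERT form mentioned above, assuming the target table already exists:
INSERT INTO existingtbl
( full-select );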
[Note: If you create the source to be used by CRTQMQRY, you are advised to create it as CRTSRCPF yourlib/QQMQRYSRC RCDLEN(91), since the compiler will only use 79 columns of your source data (adding 12 for sequence and change date =91). However for QM Forms, which can be used to provide additional formatting, the CRTQMFORM compiler will use 81 columns so RCDLEN(93) is advised for QQMFORMSRC.]
RUNQRY is a utility that lets you execute a query that was created by another utility named WRKQRY. If you really want to process SQL statements held in a file, try RUNSQLSTM. It uses a source physical file to store the statements, not a database file. The standard name for that source physical file is QQMQRYSRC. To create that file: CRTSRCPF yourlib/QQMQRYSRC. Then you can use PDM to work with that source PF: WRKMBRPDM yourlib/QQMQRYSRC. Use F6 to create a new source member; make it source type TXT. Then use option 2, which will start an editor called SEU. Copy/paste your SQL statements into this editor and press F3 to save the source. Once the source is saved, use RUNSQLSTM to execute it.
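The command sequence from that answer, condensed (yourlib and the member name are placeholders):
CRTSRCPF FILE(yourlib/QQMQRYSRC)
WRKMBRPDM FILE(yourlib/QQMQRYSRC)
RUNSQLSTM SRCFILE(yourlib/QQMQRYSRC) SRCMBR(MYSQL) COMMIT(*NONE)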
It is (now) possible to run SQL directly in a CL program without using QM Query, RUNSQLSTM or QShell.
Here is an article that discusses the RUNSQL statement in CL programs...
http://www.mcpressonline.com/cl/the-cl-corner-introducing-the-new-run-sql-command.html
The article contains information on what OS levels are supported as well as clear examples of several ways to use the RUNSQL statement.
This will work in two steps:
RUNSQL SQL('CREATE TABLE QTEMP/REPORT AS (SELECT +
EXTRACT_DATE , SYSTEM, ODLBNM, SUM( +
OBJSIZE_MB ) AS LIB_SIZE FROM +
ZSYSCOM/DISKRPTHST WHERE ODLBNM LIKE +
''SIS%'' GROUP BY EXTRACT_DATE, SYSTEM, +
ODLBNM ORDER BY LIB_SIZE DESC) WITH +
DATA') COMMIT(*NONE) DATFMT(*USA) DATSEP(/)
RUNQRY QRYFILE((QTEMP/REPORT)) OUTTYPE(*PRINTER) +
OUTFORM(*DETAIL) PRTDFN(*NO) PRTDEV(*PRINT)
The first step creates a temporary table result in QTEMP, and the second step runs an ad hoc query over just the temporary table to a spool file.
Thanks,
Michael Frilot
There is of course a totally different solution: you could write and compile a program containing the statement. It requires some longer reading into, especially if you are new to the platform, but it should give you the most flexibility over what you do with the results. You can use SQL in C, C++, RPG, RPG/LE, REXX, PL (of which I don't know what it is) and COBOL. Doing that, you can react in any processable way to the results of one query and start/create other queries based on what you get.
Although some old-fashioned RPG programmers try everything to deny that SQL in RPG exists, it is possible today, in many cases, to write RPG programs with SQL only and no direct file access (without F-specs, for those who know RPG).
If your solution works for you, perfect. If you need to do something else, take a look at this PDF: http://publib.boulder.ibm.com/infocenter/iseries/v5r3/topic/rzajp/rzajp.pdf
The integration into RPG is not too bad; it works with the normal program flow. It would look something like this (in free form):
/free
// init search values:
searchval = 'Someguy';
// do the sql query:
exec sql
SELECT column1, column2
INTO :var1, :var2
FROM somelib/somefile
WHERE keycol=:searchval;
// now do something with the values:
some_proc(var1);
/end-free
In this, var1, var2, and searchval are ordinary RPG variables; no quoting is needed. It also works with data structures (externally described ones, e.g. the record format of the file itself, fit well). You can work with cursors and loops too, of course. I feel that RPG programs tend to be easier to read with this.
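Such embedded-SQL RPG source is compiled with the SQL precompiler rather than the plain RPG compiler; a typical (hypothetical) invocation:
CRTSQLRPGI OBJ(MYLIB/MYPGM) SRCFILE(MYLIB/QRPGLESRC) SRCMBR(MYPGM) COMMIT(*NONE)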

SQL statement against Access 2010 DB not working with ODBC

I'm attempting to run a simple statement against an Access DB to find records.
Data validation in the records was horrible, and I cannot sanitize it. Meaning, it must be preserved as is.
I need to be able to search against a string with whitespace and hyphen characters removed. The following statement works directly in Access 2010:
select * from dummy where Replace(Replace([data1],' ',''),'-','') = 'ABCD1234';
Running it from an ODBC connection via PHP will not. It produces the following error:
SQL error: [Microsoft][ODBC Microsoft Access Driver] Undefined function 'Replace' in expression., SQL state 37000 in SQLExecDirect
Creating a query in the database that runs the function and attempting to search its values indirectly causes the same error:
select * from dummy_indirect where Expr1 = 'ABCD1234';
I've attempted to use both ODBC drivers present: ODBCJR32.dll (03/22/2010) and ACEODBC.dll (02/18/2007). To my knowledge these should be current, as they were installed with the full Access 2010 and the Access 2010 Database Engine.
Any ideas on how to work around this error and achieve the same effect are welcome. Please note that I cannot alter the database in any way, shape, or form. That indirect query was created in another mdb file that has the original tables linked from the original DB.
UPDATE
OleDB did not really affect anything.
$dsn= "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=c:\dummy.mdb;";
I'm not attempting to use it as a web backend either. I'm not a sadomasochist.
There is a legacy system that I must support that does use Access as a backend. Data gets populated there from other old systems that I must integrate into more modern systems. Hence, the creation of an API with Apache/PHP that is running on the server supporting the legacy system.
I need to be able to search a table that has an alphanumeric case identifier to get a numeric identifier that is unique and tied to a generator (AutoNumber in Access). Users have been using it as a trash box for years (inconsistent data entry with sporadic notations), so the only solution I have is to strip everything except alphanumeric characters out of both the field value and the search value and attempt to perform a LIKE comparison against it.
If not Replace(), which Access supports, what ODBC-compatible functions exist that I can use to do the same kind of comparison?
Just to recap, the Access db engine will not recognize the Replace() function unless your query is run from within an Access application session. Any attempt from outside Access will trigger that "Undefined function" error message. You can't avoid the error by switching from ODBC to OleDb as the connection method. And you also can't trick the engine into using Replace() by hiding it in a separate query (in the same or another Access db) and using that query as the data source for your main query.
This behavior is determined by Access' sandbox mode. That linked page includes a list of functions which are available in the default sandbox mode. That page also describes how you can alter the sandbox mode. If you absolutely must have Replace() available for your query, perhaps the lowest setting (0) would allow it. However, I'm not recommending you do that. I've never done it myself, so don't know anything about the consequences.
As for alternatives for Replace(), it would help to know about the variability in the values you're searching. If the space or dash characters appear in only one or a few consistent positions, you could do a pattern match with a Like expression. For example, if the search field values consist of 4 letters, an optional space or dash, followed by 4 digits, a WHERE clause like this should work for the variations of "ABCD1234":
SELECT * FROM dummy
WHERE
data1 = 'ABCD1234'
OR data1 Like 'ABCD[- ]1234';
Another possibility is to compare against a list of values:
SELECT * FROM dummy
WHERE
data1 IN ('ABCD1234','ABCD 1234','ABCD-1234');
However if your search field values can include any number of spaces or dashes at any position within the string, that approach is no good. And I would look real hard for some way to make the query task easier:
You can't clean the stored values because you're prohibited from altering the original Access db in any way. Perhaps you could create a new Access db, import the data, and clean that instead.
Set up the original Access db as a linked server in SQL Server and build your query to take advantage of SQL Server features.
Surrender. :-( Pull in a larger data set to your PHP client code, and evaluate which rows to use vs. which to ignore.
I'm not sure you can do this with ODBC and your constraints. The MS Access driver is limited (by design; MS wants you to use SQL Server for back ends).
Can you use OLEDB? That might be an option.