QlikView - How to unqualify/concatenate many rows from many different tables?

I have multiple tables; in each of them some fields are identical and several others are different. When I try to load them all at the same time the program "hangs" and I have to restart the application. It seems to me that the solution would be to use QUALIFY and UNQUALIFY, or some other script technique. I want all fields that are the same to be concatenated; however, some tables contain up to 229 columns.
Starting from the key fields, I need to be able to concatenate the information without losing the value of each field.
How should I proceed to make all the columns that are the same act as a "KEY" without having to list all of them?
This is how I am using the script:
LOAD Nome as Comarca,
Vara,
Entrancia,
Juiz,
Escrivao,
NomeMapa,
IdComarca,
Mes,
Ano,
MatJuiz,
IdVara,
IdEscrivao,
IdMapa,
DataFechamentoJuiz,
DataFechamentoEscrivao,
TitularRespondendo,
AndCausOrdiMesAnt,
AndCausOrdiAutu,
AndCausOrdiArqui,
AndCausOrdiAnda,
AndCausSumMesAnt,
AndCausSumAutu,
AndCausSumArqui,
AndCausSumAnda,
AndProcCautMesAnt,
AndProcCautAutu,
AndProcCautArqui,
AndProcCautAnda,
AndEmbarMesAnt,
AndEmbarAutu,
AndEmbarArqui,
AndEmbarAnda,
AndDemaisMesAnt,
AndDemaisAutu,
AndDemaisArqui,
AndDemaisAnda,
AndExecTotMesAnt,
AndExecTotAutu,
AndExecTotArqui,
AndExecTotAnda,
AndTituloExMesAnt,
AndTituloExAutu,
AndTituloExArqui,
AndTituloExAnda,
AndTituloJudMesAnt,
AndTituloJudAutu,
AndTituloJudArqui,
AndTituloJudAnda,
AndExecFiscMesAnt,
AndExecFiscAutu,
AndExecFiscArqui,
AndExecFiscAnda,
AndFedMesAnt,
AndFedAutu,
AndFedArqui,
AndFedAnda,
AndEstMesAnt,
AndEstAutu,
AndEstArqui,
AndEstAnda,
AndMuniMesAnt,
AndMuniAutu,
AndMuniArqui,
AndMuniAnda,
AndFalenMesAnt,
AndFalenAutu,
AndFalenArqui,
AndFalenAnda,
AndProcJuriMesAnt,
AndProcJuriAutu,
AndProcJuriArqui,
AndProcJuriAnda,
AndAcoPrevMesAnt,
AndAcoPrevAutu,
AndAcoPrevArqui,
AndAcoPrevAnda,
AndInciMesAnt,
AndInciAutu,
AndInciArqui,
AndInciAnda,
AndAcoIndeMesAnt,
AndAcoIndeAutu,
AndAcoIndeArqui,
AndAcoIndeAnda,
AndMandaMesAnt,
AndMandaAutu,
AndMandaArqui,
AndMandaAnda,
AndAcaCivMesAnt,
AndAcaCivAutu,
AndAcaCivArqui,
AndAcaCivAnda,
AndAcoTrabMesAnt,
AndAcoTrabAutu,
AndAcoTrabArqui,
AndAcoTrabAnda,
AndOutMesAnt,
AndOutAutu,
AndOutArqui,
AndOutAnda,
AndTotalMesAnt,
AndTotalAutu,
AndTotalArqui,
AndTotalAnda,
AndPrecMesAnt,
AndPrecAutu,
AndPrecArqui,
AndPrecAnda,
AndExecMesAnt,
AndExecAutu,
AndExecArqui,
AndExecAnda,
AndExecPenMesAnt,
AndExecPenAutu,
AndExecPenArqui,
AndExecPenAnda,
AndExecSuspMesAnt,
AndExecSuspAutu,
AndExecSuspArqui,
AndExecSuspAnda,
AndExecFisMesAnt,
AndExecFisAutu,
AndExecFisArqui,
AndExecFisAnda,
AndIncidProcJulg,
AndIncidProcExecJulg,
ProcConDist2005,
EmbExecDist2005,
ProcConDist2006MesAnt,
ProcConDist2006Julga,
ProcConDist2006Anda,
EmbaExec2006MesAnt,
EmbaExec2006Julga,
EmbaExec2006Anda,
MovProcConcPer,
MovProcConcl,
MovProcVistaMP,
MovProcCargaMP,
MovProcVistaPart,
MovProcOutTotal,
MovProcAudi,
MovProcCumpri,
MovProcDev,
MovProcPericia,
MovProcPubEdit,
MovProcProvEscriv,
MovProcSusp,
MovProcOutSitu,
MovProcArquiBaixa,
MovRecurInter,
MovRecurJulgAgravo,
MovRecurJulgapelacao,
MovRecurJulgtotal,
MovRecurProvAgravo,
MovRecurProvApelacao,
MovRecurProvTotal,
MovRecurInterFase,
MovRecurInterPend,
MovPrecNum,
MovPrecDataDist,
MovPrecDataUlt,
MovPrecDevTot,
MovPrecDevCit,
MovPrecDevOut,
RemTJMesAnt,
RemTJMesAtual,
RemTJDevolvTJ,
RemTJTotal,
RemOutTJMesAnt,
RemOutTJMesAtual,
RemOutTJDevolvTJ,
RemOutTJTotal,
RemOutComMesAnt,
RemOutComMesAtual,
RemOutComDevolvTJ,
RemOutComTotal,
RemRediOutMesAnt,
RemRediOutMesAtual,
RemRediOutDevolvTJ,
RemRediOutTotal,
RemOutrasInfo,
CustasProc,
CustasTaxaJudi,
CustasOutras,
AtosSentResMeritoTotal,
AtosSentResMeritoConhe,
AtosSentResMeritoCautelar,
AtosSentHomoTotal,
AtosSentHomoConhe,
AtosSentHomoCautelar,
AtosSentSemResolMeritoTotal,
AtosSentSemResolMeritoConhe,
AtosSentSemResolMeritoCautelar,
AtosMSentExecTotal,
AtosSentExecFiscal,
AtosMSentExecTitJud,
AtosMSentExecTitExt,
AtosDecisaoTotal,
AtosDecisaoLiminar,
AtosDecisaoOutras,
AtosDespProf,
AtosDespProfPlantao,
AtosAudRealizTotal,
AtosAudIntru,
AtosAudJulg,
AtosAudConcil,
AtosAudOutros,
AtosAudNRealiz,
AtosAudDesig,
AtosAcordoAudi,
AtosSentProfAudi,
AtosPesOuvAudi,
AtosDataAudiAfast,
AtosAutosConcSent,
AtosAutosConcPratica,
AtosAutosConcTotal,
AtosAutosConcSent100,
AtosAutosConcDiv100,
AtosDataConcAntiga,
AtosDecSusp,
AtosMandPriCivil,
AtosPresosCiveis,
AtosProcAntTramitNum,
AtosProcAntTramitData,
AtosProcAntTramitDUM,
AtosPrecAntTramitNum,
AtosPrecAntTramiData,
AtosPrecAntTramiDUM,
AtosPrecDevTotal,
AtosPrecDevCitacao,
AtosPrecDevOutras,
AtosInfTJ,
AtosOutrasAtividades,
Ferias,
MatSubstituicao,
MatAssinatura,
DataIniFerias,
DataFimFerias,
RemetOutraVara
FROM
[Z:\QLIKVIEW\Todos os Mapas\Area Cível.xlsx]
(ooxml, embedded labels, table is AreaCivil);
This is the full list of fields from 1 of the 16 tables; some fields are the same in each table and some are different.

There are probably two different issues here. First, it very likely hangs because of memory issues - enable the detailed error log in the document settings so you can get some details during document reload.
Next, if I understand you correctly, you want to concatenate all 16 files into one table, and these files have some common columns and some different ones?
You have a few options here, but I would recommend manually renaming the common fields in your load script, and also adding the columns that exist only in some files to all the other files - they will simply be blank for the files that do not have them.
For example, say file1 has columns key1, key2, c1, c2
and file2 has columns key1, key2, c1, c3.
You can load them separately in the load script: when you load file1, add a blank column c3, and for file2, add a blank column c2 - not in the actual files, but in your LOAD statements.
You can also force concatenation by putting the CONCATENATE keyword before your LOAD statement, but I personally like to keep control of the QV load script myself.
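Putting both pieces of advice together, a minimal sketch using the hypothetical file1/file2 layout above (the file names, table name and format specifiers are assumptions):

Combined:
LOAD key1,
     key2,
     c1,
     c2,
     '' as c3          // blank placeholder; c3 only exists in file2
FROM [file1.xlsx] (ooxml, embedded labels);

CONCATENATE (Combined)
LOAD key1,
     key2,
     c1,
     '' as c2,         // blank placeholder; c2 only exists in file1
     c3
FROM [file2.xlsx] (ooxml, embedded labels);

Note that QlikView auto-concatenates loads whose field sets are identical, so once the renames and blank placeholders make the field lists match, the CONCATENATE keyword mostly just documents the intent.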

Related

Regex comparison in Oracle between 2 varchar columns (from different tables)

I am trying to find a way to capture relevant errors from the Oracle alert log. I have one table (ORA_BLACKLIST) whose column values are listed below (these are the values I want to ignore from V$DIAG_ALERT_EXT).
Below is sample data from the ORA_BLACKLIST table. This table can grow as additional errors to ignore from the alert log are added.
ORA-07445%[kkqctdrvJPPD
ORA-07445%[kxsPurgeCursor
ORA-01013%
ORA-27037%
ORA-01110
ORA-2154
V$DIAG_ALERT_EXT has a MESSAGE_TEXT column containing text like the following:
ORA-01013: user requested cancel of current operation
ORA-07445: exception encountered: core dump [kxtogboh()+22] [SIGSEGV] [ADDR:0x87] [PC:0x12292A56]
ORA-07445: exception encountered: core dump [java_util_HashMap__get()] [SIGSEGV]
ORA-00600: internal error code arguments: [qercoRopRowsets:anumrows]
I want to write a query something like the one below, to ignore the blacklisted errors and capture only the relevant info.
select
    dae.instance_id,
    dae.container_name,
    err_count,
    dae.message_level
from
    ORA_BLACKLIST ob,
    V$DIAG_ALERT_EXT dae
where
group by .....;
Can someone suggest a way or sample code to achieve this?
I should have provided the exact contents of the blacklist table. It currently contains some (Perl) regexes, and I want to convert them to Oracle-style regexes and compare them against the V$DIAG_ALERT_EXT MESSAGE_TEXT column. Below are sample Perl regexes from my blacklist table.
ORA-0(,|$| )
ORA-48913
ORA-00060
ORA-609(,|$| )
ORA-65011
ORA-65020
ORA-31(,|$| )
ORA-7452
ORA-959(,|$| )
ORA-3136(,|)|$| )
ORA-07445.[kkqctdrvJPPD
ORA-07445.[kxsPurgeCursor
Your blacklist table looks like it contains LIKE patterns, not regular expressions.
You can write a query like this:
select dae.*  -- or whatever columns you want
from V$DIAG_ALERT_EXT dae
where not exists (select 1
                  from ORA_BLACKLIST ob
                  where dae.message_text like ob.<column name>);
This will not have particularly good performance if the tables are large.
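If the blacklist entries really are regular expressions rather than LIKE patterns, the same query shape works with REGEXP_LIKE; a sketch, assuming the blacklist column is called PATTERN (a hypothetical name):

select dae.*  -- or whatever columns you want
from V$DIAG_ALERT_EXT dae
where not exists (select 1
                  from ORA_BLACKLIST ob
                  where regexp_like(dae.message_text, ob.pattern));

The same performance caveat applies: every message is tested against every pattern.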

Import a txt file with 2 columns into different columns in SQL Server Management Studio

I have a txt file containing numerous items in the following format
DBSERVER: HKSER
DBREPLICAID: 51376694590
DBPATH: redirect.nsf
DBTITLE: Redirect AP
DATETIME: 09.03.2015 09:44:21 AM
READS: 1
Adds: 0
Updates: 0
Deletes: 0
DBSERVER: HKSER
DBREPLICAID: 21425584590
DBPATH: redirect.nsf
DBTITLE: Redirect AP
DATETIME: 08.03.2015 09:50:20 PM
READS: 2
Adds: 0
Updates: 0
Deletes: 0
...
I would like to import the txt file into the following format in SQL:
DBSERVER  DBREPLICAID  DBPATH        DBTITLE      DATETIME                 ...
HKSER     51376694590  redirect.nsf  Redirect AP  09.03.2015 09:44:21 AM
HKSER     21425584590  redirect.nsf  Redirect AP  08.03.2015 01:08:07 AM
Thanks a lot!
You can dump that file into a temporary table with just a single text column. Once imported, you loop through that table using a cursor, storing the content into variables and, every 10 records, inserting a new row into the real target table.
Not the most elegant solution, but it's simple and it will do the job.
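A rough T-SQL sketch of that idea. All names here - the staging table #raw, the target table dbo.DbStats, its columns, and the file path - are hypothetical, and it keys off the field names rather than counting lines:

-- One-column staging table; each line of the file becomes one row.
CREATE TABLE #raw (line VARCHAR(4000));

BULK INSERT #raw
FROM 'C:\data\stats.txt'              -- hypothetical path
WITH (ROWTERMINATOR = '\n');

DECLARE @line VARCHAR(4000), @key VARCHAR(100), @val VARCHAR(4000),
        @srv VARCHAR(100), @rep VARCHAR(100), @pth VARCHAR(255),
        @ttl VARCHAR(255), @dt VARCHAR(50);

-- Note: for strict ordering, add an IDENTITY column and ORDER BY it.
DECLARE c CURSOR LOCAL FAST_FORWARD FOR SELECT line FROM #raw;
OPEN c;
FETCH NEXT FROM c INTO @line;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Split each "NAME: value" line on the first colon.
    SET @key = LEFT(@line, CHARINDEX(':', @line + ':') - 1);
    SET @val = LTRIM(RTRIM(STUFF(@line, 1, CHARINDEX(':', @line + ':'), '')));

    IF      @key = 'DBSERVER'    SET @srv = @val;
    ELSE IF @key = 'DBREPLICAID' SET @rep = @val;
    ELSE IF @key = 'DBPATH'      SET @pth = @val;
    ELSE IF @key = 'DBTITLE'     SET @ttl = @val;
    ELSE IF @key = 'DATETIME'    SET @dt  = @val;
    ELSE IF @key = 'Deletes'     -- last field of each record: flush one row
        INSERT INTO dbo.DbStats (DbServer, DbReplicaId, DbPath, DbTitle, LogDateTime)
        VALUES (@srv, @rep, @pth, @ttl, @dt);

    FETCH NEXT FROM c INTO @line;
END;
CLOSE c;
DEALLOCATE c;
DROP TABLE #raw;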
Using BULK INSERT you can load the headers and data into two different columns, and then with a dynamic SQL query you can create a table and insert the data as required.
For something like this I'd probably use SSIS.
The idea is to create a Script Component (as a Transformation).
You'll need to manually define your output columns (e.g. DBSERVER String (100)).
The source is your file (read normally).
The idea is that you build your rows line by line, then add the full row to the output buffer, e.g.
Output0Buffer.AddRow();
Then write the rows to your destination.
If all the files have a common format, you can wrap the whole thing in a Foreach Loop container.

Ordering by sub string of a field with OPNQRYF

I have a requirement to change the order in which records are printed in a report: I need to order the records by a substring of one of their fields.
There is an OPNQRYF like the one below before the call to the report RPG program is made:
OVRDBF FILE(MOHDL35) SHARE(*YES)
BLDQRYSLT QRYSLT(&QRYSLT) +
SELECT((CHARDT *GE &FRDATE F2) +
(CHARDT *LE &TODATE F2) +
(HDPLVL *EQ 'FS' F2) +
(HDMPLT *EQ &PLANT F2))
OPNQRYF FILE((*LIBL/MOHDL35)) +
QRYSLT(&QRYSLT) +
KEYFLD(*FILE) +
MAPFLD((ZONEDT HDAEDT *ZONED 8 0) +
(CHARDT ZONEDT *CHAR 8))
One way I can see to do this is to use RUNSQL to create a temp table in QTEMP with the MOHDL35 records in the required order; the SUBSTR SQL function would make this much easier to achieve. The temp table should have the same structure as MOHDL35 (field names, record format).
Then I would replace the use of this file in the RPG program with the newly created table. I haven't tried this yet, but would it work? Does it sound like a good idea? Are there any better suggestions?
You can do that with OPNQRYF by using the MAPFLD parameter like this:
OPNQRYF FILE((JCVCMP))
KEYFLD((*MAPFLD/PART))
MAPFLD((PART '%SST(VCOMN 2 5)'))
The records in JCVCOMN are now sorted like this:
VENNO VCMTP VCMSQ VCOMN
----- ----- ----- -------------------------
1,351 ICL 3 Let's see what wow
1,351 ICL 1 This is a test
1,351 NDA 2 another comment
1,351 NDA 1 more records
Notice that the records are sorted by the substring of VCOMN starting at the second character.
So here is your OPNQRYF with multiple key fields specified:
OPNQRYF FILE((*LIBL/MOHDL35))
QRYSLT(&QRYSLT)
KEYFLD((*MAPFLD/CHARDT) (*MAPFLD/HDPROD))
MAPFLD((ZONEDT HDAEDT *ZONED 8 0) (CHARDT ZONEDT *CHAR 8)
(HDPROD '%SST(HDPROD 1 2) *CAT %SST(HDPROD 10 12)
*CAT %SST(HDPROD 13 16)'))
Some notes: I am guessing that HDAEDT is a PACKED number. If so, you don't need to map it to a ZONED number just to get a character value. If you need the ZONED value, that is fine (but PACKED works just as well). Otherwise, you can just use:
MAPFLD((CHARDT HDAEDT *CHAR 8))
Also, in your OVRDBF you need to make sure you choose the correct override scope (OVRSCOPE). The IBM default is OVRSCOPE(*ACTGRPDFN). OPNQRYF also has a scope, OPNSCOPE. You need to make sure that the OVRSCOPE, the OPNSCOPE, and the program using the table all use the same activation group. There are a lot of different combinations. If you can't make it work, you can always change them all to *JOB, and that will work. But there is nothing intrinsic about OPNQRYF that prevents it from working from a CL program.
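For example, pinning everything to the job level (a sketch reusing the simplified MAPFLD from above; adjust to your own selection and key parameters):

OVRDBF  FILE(MOHDL35) SHARE(*YES) OVRSCOPE(*JOB)
OPNQRYF FILE((*LIBL/MOHDL35)) QRYSLT(&QRYSLT) +
          KEYFLD((*MAPFLD/CHARDT)) +
          MAPFLD((CHARDT HDAEDT *CHAR 8)) +
          OPNSCOPE(*JOB)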
I would try creating a view with all the table's fields plus the substringed column, and then use OPNQRYF with that instead of the table, specifying the substringed column as the KEYFLD. That would probably be simpler (and potentially quicker) than copying the whole lot into QTEMP every time.
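A sketch of that view; the view name MOHDL35V is hypothetical, and DIGITS is used here to turn the numeric date into a sortable character string (assuming HDAEDT is the date field, as in the question's MAPFLD):

CREATE VIEW MOHDL35V AS
  SELECT M.*, DIGITS(HDAEDT) AS CHARDT
  FROM MOHDL35 M;

Then the OPNQRYF needs no MAPFLD at all:

OPNQRYF FILE((MOHDL35V)) QRYSLT(&QRYSLT) KEYFLD((CHARDT))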

AS400 SQL Dynamic Delete issue

Some background...
I have 20+ files.
I read the file names from a prebuilt table, building a subfile screen.
I select 1 file, then build another screen with the contents of the selected file.
I then select the record I want to delete. So far so good...
eval MySQL = stat3 + %trimr(scrwcrd) + STAT3B
My SQL statement, which in debug reads:
MySQL = DELETE FROM FILESEL WHERE K00001 = ? with NC
PREPARE STAT3 FROM :MYSQL
EXECUTE STAT3 USING :PROD
where :PROD is the variable supplied by the screen selection.
My SQLCOD ends up at 100 with SQLSTT = 02000 after the EXECUTE, indicating row not found for the delete.
I know for a fact that this is not the case: I can see the record in the selected file, and I can see the value of PROD in debug. Any ideas?
What data types and lengths are the K00001 field and the :PROD host variable?
Equality could be the issue: if they are character fields, you may need to TRIM/%TRIM the values in order to get a match.
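If that is the problem, trimming both sides of the comparison is one way to rule it out. A hedged sketch (the statement parts and variable names follow the question; TRIM here is the SQL function, %trim the RPG built-in - leading blanks are the usual culprit, since trailing blanks are ignored in CHAR comparisons anyway):

// Build the statement so the column value is trimmed too
eval MySQL = 'DELETE FROM ' + %trimr(scrwcrd) +
             ' WHERE TRIM(K00001) = ? WITH NC'
// ...and trim the host variable before binding it
eval PROD = %trim(PROD)
exec sql PREPARE STAT3 FROM :MYSQL;
exec sql EXECUTE STAT3 USING :PROD;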

What is the best way to run N independent column updates in PostgreSQL? What is the best way to do it in the SQL spec?

I'm looking for a more efficient way to run many column updates on the same table, like this:
UPDATE my_table
SET col = regexp_replace(col, 'foo', 'bar')
WHERE col ~ 'foo';
Here foo and bar stand for a combination of 40 different regex replaces. I doubt even 25% of the dataset needs to be updated at all; what I want to know is whether it is possible to cleanly achieve the following in SQL:
A single pass update
A single match of the regex, triggers a single replace
Not running all possible regexp_replaces if only one matches
Not updating all columns if only one needs the update
Not updating a row if no column has changed
I'm also curious - I know that in MySQL (bear with me)
UPDATE foo SET bar = 'baz'
has an implicit WHERE bar != 'baz' clause.
However, I know this doesn't exist in PostgreSQL; I think I could at least answer one of my questions if I knew how to skip a single row's update when the target columns weren't changed.
Something like
UPDATE TABLE table
SET col = *temp_var* = regexp_replace( col, 'foo', 'bar' )
WHERE col != *temp_var*
Do it in code. Open up a cursor, then: grab a row, run it through the 40 regular expressions, and if it changed, save it back. Repeat until the cursor doesn't give you any more rows.
Whether you do it that way or come up with the magical SQL expression, it's still going to be a row scan of the entire table, but the code will be much simpler.
Experimental Results
In response to criticism, I ran an experiment. I inserted 10,000 lines from a documentation file into a table with a serial primary key and a varchar column. Then I tested two ways to do the update. Method 1:
in a transaction:
  open a cursor (select for update)
  while reading 100 rows from the cursor returns any rows:
    for each row:
      for each regular expression:
        do the gsub on the text column
      update the row
This takes 1.16 seconds with a locally connected database.
Then the "big replace," a single mega-regex update:
update foo set t =
regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(t,
E'\bcommit\b', E'COMMIT'),
E'\b9acf10762b5f3d3b1b33ea07792a936a25e45010\b',
E'9ACF10762B5F3D3B1B33EA07792A936A25E45010'),
E'\bAuthor:\b', E'AUTHOR:'),
E'\bCarl\b', E'CARL'), E'\bWorth\b',
E'WORTH'), E'\b<cworth#cworth.org>\b',
E'<CWORTH#CWORTH.ORG>'), E'\bDate:\b',
E'DATE:'), E'\bMon\b', E'MON'),
E'\bOct\b', E'OCT'), E'\b26\b',
E'26'), E'\b04:53:13\b', E'04:53:13'),
E'\b2009\b', E'2009'), E'\b-0700\b',
E'-0700'), E'\bUpdate\b', E'UPDATE'),
E'\bversion\b', E'VERSION'),
E'\bto\b', E'TO'), E'\b2.9.1\b',
E'2.9.1'), E'\bcommit\b', E'COMMIT'),
E'\b61c89e56f361fa860f18985137d6bf53f48c16ac\b',
E'61C89E56F361FA860F18985137D6BF53F48C16AC'),
E'\bAuthor:\b', E'AUTHOR:'),
E'\bCarl\b', E'CARL'), E'\bWorth\b',
E'WORTH'), E'\b<cworth#cworth.org>\b',
E'<CWORTH#CWORTH.ORG>'), E'\bDate:\b',
E'DATE:'), E'\bMon\b', E'MON'),
E'\bOct\b', E'OCT'), E'\b26\b',
E'26'), E'\b04:51:58\b', E'04:51:58'),
E'\b2009\b', E'2009'), E'\b-0700\b',
E'-0700'), E'\bNEWS:\b', E'NEWS:'),
E'\bAdd\b', E'ADD'), E'\bnotes\b',
E'NOTES'), E'\bfor\b', E'FOR'),
E'\bthe\b', E'THE'), E'\b2.9.1\b',
E'2.9.1'), E'\brelease.\b',
E'RELEASE.'), E'\bThanks\b',
E'THANKS'), E'\bto\b', E'TO'),
E'\beveryone\b', E'EVERYONE'),
E'\bfor\b', E'FOR')
The mega-regex update takes 0.94 seconds to update.
At 0.94 seconds compared to 1.16, the mega-regex update is indeed faster, running in 81% of the time of doing it in code. It is not, however, a lot faster. And ye gods, look at that update statement. Do you want to write that, or try to figure out what went wrong when Postgres complains that you dropped a parenthesis somewhere?
Code
The code used was:
def stupid_regex_replace
  sql = Select.new
  sql.select('id')
  sql.select('t')
  sql.for_update
  sql.from(TABLE_NAME)
  Cursor.new('foo', sql, {}, @db) do |cursor|
    until (rows = cursor.fetch(100)).empty?
      for row in rows
        for regex, replacement in regexes
          row['t'] = row['t'].gsub(regex, replacement)
        end
        sql = Update.new(TABLE_NAME, @db)
        sql.set('t', row['t'])
        sql.where(['id = %s', row['id']])
        sql.exec
      end
    end
  end
end
I generated the regular expressions dynamically by taking words from the file; for each word "foo", its regular expression was "\bfoo\b" and its replacement string was "FOO" (the word uppercased). I used words from the file to make sure that replacements did happen. I made the test program print out the regexes so you can see them. Each pair is a regex and its corresponding replacement string:
[[/\bcommit\b/, "COMMIT"],
[/\b9acf10762b5f3d3b1b33ea07792a936a25e45010\b/,
"9ACF10762B5F3D3B1B33EA07792A936A25E45010"],
[/\bAuthor:\b/, "AUTHOR:"],
[/\bCarl\b/, "CARL"],
[/\bWorth\b/, "WORTH"],
[/\b<cworth#cworth.org>\b/, "<CWORTH#CWORTH.ORG>"],
[/\bDate:\b/, "DATE:"],
[/\bMon\b/, "MON"],
[/\bOct\b/, "OCT"],
[/\b26\b/, "26"],
[/\b04:53:13\b/, "04:53:13"],
[/\b2009\b/, "2009"],
[/\b-0700\b/, "-0700"],
[/\bUpdate\b/, "UPDATE"],
[/\bversion\b/, "VERSION"],
[/\bto\b/, "TO"],
[/\b2.9.1\b/, "2.9.1"],
[/\bcommit\b/, "COMMIT"],
[/\b61c89e56f361fa860f18985137d6bf53f48c16ac\b/,
"61C89E56F361FA860F18985137D6BF53F48C16AC"],
[/\bAuthor:\b/, "AUTHOR:"],
[/\bCarl\b/, "CARL"],
[/\bWorth\b/, "WORTH"],
[/\b<cworth#cworth.org>\b/, "<CWORTH#CWORTH.ORG>"],
[/\bDate:\b/, "DATE:"],
[/\bMon\b/, "MON"],
[/\bOct\b/, "OCT"],
[/\b26\b/, "26"],
[/\b04:51:58\b/, "04:51:58"],
[/\b2009\b/, "2009"],
[/\b-0700\b/, "-0700"],
[/\bNEWS:\b/, "NEWS:"],
[/\bAdd\b/, "ADD"],
[/\bnotes\b/, "NOTES"],
[/\bfor\b/, "FOR"],
[/\bthe\b/, "THE"],
[/\b2.9.1\b/, "2.9.1"],
[/\brelease.\b/, "RELEASE."],
[/\bThanks\b/, "THANKS"],
[/\bto\b/, "TO"],
[/\beveryone\b/, "EVERYONE"],
[/\bfor\b/, "FOR"]]
If this were a hand-generated list of regexes, and not automatically generated, my question still stands: which would you rather have to create or maintain?
For the skip-update question, look at suppress_redundant_updates_trigger() - see http://www.postgresql.org/docs/8.4/static/functions-trigger.html.
This is not necessarily a win - but it might well be in your case.
Or perhaps you can just add that implicit check as an explicit one?
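For example, one of the 40 replaces with the check made explicit (a sketch; my_table and col are hypothetical names, and IS DISTINCT FROM is used so NULL values don't slip through):

UPDATE my_table
SET col = regexp_replace(col, 'foo', 'bar', 'g')
WHERE col IS DISTINCT FROM regexp_replace(col, 'foo', 'bar', 'g');

The replace expression is evaluated twice per row, but rows that would not change are never written, which avoids rewriting the entire table.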