In my ABAP report I have some structure:
data:
begin of struct1,
field1 type char10,
end of struct1.
I can access its field field1 directly:
data(val) = struct1-field1.
or dynamically with ASSIGN:
assign ('struct1-field1') to field-symbol(<val>).
Also I have some internal table:
data: table1 like standard table of struct1 with default key.
append initial line to table1.
I can access column field1 of the first row directly:
data(val) = table1[ 1 ]-field1.
But I cannot access field1 with a dynamic ASSIGN:
assign ('table1[ 1 ]-field1') to field-symbol(<val>).
After the assignment, sy-subrc equals 4.
Why?
The syntax of ASSIGN (syntax1) ... is not the same as the syntax of the Right-Hand Side (RHS) of assignments ... = syntax2.
The syntax for ASSIGN is explained in the documentation of ASSIGN (variable_containing_name) ... or ASSIGN ('name') ... (chapter 1. (name) of page ASSIGN - dynamic_dobj).
Here is an abstract of what is accepted:
"name can contain a chain of names consisting of component selectors [(-)]"
"the first name [can be] followed by an object component selector (->)"
"the first name [can be] followed by a class component selector (=>)"
No mention of table expressions, so they are forbidden. The same goes for meshes...
Concerning the RHS of assignments, as described in the documentation, it can be :
Data Objects
They can be attributes or components using the selectors -, ->, =>, which can be chained multiple times (see Names for Individual Operands)
Return values or results of functional methods, return values or results of built-in functions and constructor expressions, or return values or results of table expressions
Results of calculation expressions
Sandra is absolutely right: if table expressions are not specified in the help, then they are not allowed.
You can use the ASSIGN COMPONENT statement for your dynamic access:
FIELD-SYMBOLS: <tab> TYPE INDEX TABLE.
ASSIGN ('table1') TO <tab>.
ASSIGN COMPONENT 'field1' OF STRUCTURE <tab>[ 1 ] TO FIELD-SYMBOL(<val>).
However, such dynamic access is only possible with index tables (standard and sorted) due to the nature of this form of row specification. If you try to pass a hashed table into the field symbol, it will dump.
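If the table kind is not known statically (or the table is hashed), one sketch that avoids table expressions altogether is to pick the row with LOOP ... ASSIGNING and then apply ASSIGN COMPONENT to the row; note this is an illustrative workaround, not the only possible one:

```abap
FIELD-SYMBOLS: <tab> TYPE ANY TABLE, " works for any table kind, hashed included
               <row> TYPE any.

ASSIGN ('table1') TO <tab>.
LOOP AT <tab> ASSIGNING <row>.
  " dynamic component access on the row works regardless of table kind
  ASSIGN COMPONENT 'field1' OF STRUCTURE <row> TO FIELD-SYMBOL(<val>).
  EXIT. " only the first row is needed here
ENDLOOP.
```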
DB2/400 SQL: I'm working on an SQL function that uses a global temporary table. I just have a problem declaring this table: SQL gives me an error, but I don't see what the problem is. Can someone tell me what this error is about?
Function with a declaring global temporary table
I don't speak or read french, but it appears that the error is telling you that your function definition expects a return value, but the function body does not return anything.
Now, when creating an SQL scalar function, the function body starts out with declarations. These declarations are for variables and cursors. It is somewhat unfortunate that you create a global temporary table using a statement that starts with DECLARE; it doesn't belong in the declarations but in the procedure body.
.-NOT ATOMIC-.
>>-+--------+--BEGIN--+------------+---------------------------->
'-label:-' '-ATOMIC-----'
>--+---------------------------------------+-------------------->
| .-----------------------------------. |
| V | |
'---+-SQL-variable-declaration--+-- ; +-'
+-SQL-condition-declaration-+
+-return-codes-declaration--+
'-INCLUDE-statement---------'
>--+--------------------------------------+--------------------->
| .----------------------------------. |
| V | |
'---+-DECLARE CURSOR-statement-+-- ; +-'
'-INCLUDE-statement--------'
>--+---------------------------------+-------------------------->
| .-----------------------------. |
| V | |
'---+-handler-declaration-+-- ; +-'
'-INCLUDE-statement---'
.---------------------------------.
V |
>----+-----------------------------+-+--END--+-------+---------><
'-SQL-procedure-statement-- ; ' '-label-'
It is part of the SQL procedure statement!
The declaration section must come before the creation of any tables.
While it seems counter intuitive, DECLARE GLOBAL TEMPORARY TABLE is not a normal variable declaration.
Try moving the other variable declarations above the table declaration.
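A minimal skeleton showing the ordering (all names here are placeholders, not taken from the original function):

```sql
CREATE FUNCTION my_func (p_id INT)
  RETURNS INT
  LANGUAGE SQL
BEGIN
  -- variable declarations must come first
  DECLARE v_result INT DEFAULT 0;

  -- DECLARE GLOBAL TEMPORARY TABLE is an SQL-procedure-statement,
  -- so it belongs in the body, after the declaration section
  DECLARE GLOBAL TEMPORARY TABLE session.work_tbl (id INT)
    WITH REPLACE;

  INSERT INTO session.work_tbl VALUES (p_id);
  SET v_result = (SELECT COUNT(*) FROM session.work_tbl);

  -- the function definition expects a return value
  RETURN v_result;
END
```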
This is a continuation of my question JMeter: How to benchmark data deletion from database table in batches?. I adopted the solution proposed by Dimitri T, and now I am using the test case for Oracle DB.
My test case starts by inserting 1000 entries, using 100 threads in 10 loops. After that, it deletes rows where rownum < 250.
However, my test case is no longer able to detect that the table is empty. When I view the response data in my result tree, I see the following:
249 updates
249 updates
249 updates
249 updates
4 updates
0 updates
0 updates
0 updates
...
JMeter itself does not report any errors occurring.
My thread group looks like this:
...(separate thread group to do INSERT)...
Thread Group: Do DELETE
Txn Ctrl: DELETE
While Loop: tbl still has data
JDBC Request: DELETE from tbl
JDBC PostProcessor
Loop condition is defined as follows:
${__jexl3('${count_1} > 0',)}
(As a slight change from the solution in my previous question, I surrounded the condition in quotes to prevent the ambiguous expression error from showing up.)
JDBC request is defined as follows:
delete from tbl where (entrydt < ${endDt}) and (rownum < ${deleteLimit})
(User-defined variables endDt and deleteLimit have values TO_DATE('2019/02/01 00:00:00', 'yyyy/mm/dd hh24:mi:ss') and 250 respectively.)
JDBC post-processor is defined as follows:
[Select statement] select count(*) from tbl
Variable names: count (All other fields are empty.)
Handle ResultSet: Store as String
I have tried:
Changing the loop condition; the quotes do not matter, and neither does changing ${count_1} to ${count}
Changing the post-processor: the variable names, the result variable name, and how the result set is handled
Who told you to surround the expression with quotation marks? First you break the working solution, and then you complain that the provided expression doesn't work.
Just define count_1 as a positive number via User Defined Variables and you will not have any errors in jmeter.log.
If for some reason you're not willing to do this or cannot do this - you can consider migrating to __groovy() function with an extra check of count_1 variable being set like:
${__groovy(vars.get('count_1') == null || (vars.get('count_1') as int) > 0,)}
I am trying to search for a double condition using PROC SQL.
I want to search for when 'TTT' AND 'RRR' exist in v1,v2,v3,v4...v10
so it can be that TTT is in V1 and RRR is in v10 or in V1 and V2. There are many combinations but I don't want to type each combination out obviously.
At the same time I want to also use an OR statement to search for variables from v1-v10 that either contain ('TTT' AND 'RRR') OR ('GGG') alone. I've been searching around and thought maybe a CASE WHEN would work, but I don't think so, and I also need it to be PROC SQL.
I know the code below is wrong (it's a longer version), but just to give the gist of what I mean:
WHERE date BETWEEN &start. AND &end.
AND ( ((V1 = 'TTT' and V2 ='RRR') OR (V1 = 'GGG')) OR ((V1 = 'TTT' and V3 ='RRR') OR (V1 = 'GGG')) OR ((V1 = 'TTT' and V4 ='RRR') OR (V1 = 'GGG')) ...)
Thanks,
Much Appreciated!
UPDATED version based on @Tom's answer
data diag_table;
input v1 $ v2 $ v3 $ v4 $ v5 $;
cards;
TTT . . RRR .
GGG . . . .
. RRR . . TTT
. . . . .
FFF . . . .
. . RRR1 . .
TTT . . GGG .
. RRR . GGG .
run;
proc print data=diag_table;
quit;
proc sql;
create table diag_found as
select *
from diag_table
WHERE (whichc('TTT',v1,v2,v3,v4,v5) and whichc('RRR',v1,v2,v3,v4,v5)) or (whichc('GGG',v1,v2,v3,v4,v5));
quit;
proc print data=diag_found;
quit;
The only problem with this code is that it also grabs cases where the rows contain GGG + RRR or GGG + TTT.
I tried adding parentheses around the two groups, but it didn't change anything.
UPDATE:
@Tom and @Joe:
Yes, I guess I do mean an XOR with an AND inside.
So if A=TTT B=RRR C=GGG
(A AND B) XOR C
Either A+B combination OR C
Thank you!!
You can use the WHICHC() function to do what you want.
where whichc('TTT',v1,v2) and whichc('RRR',v1,v2)
Note it is a little harder to use in SQL since you cannot use variable lists so you will need to explicitly list each variable name, whichc('TTT',v1,v2,v3,v4,v5), instead of just using a variable list,whichc('TTT', of v1-v5) , like you could in a data step.
Not sure what you mean by GGG alone. But if by that you mean GGG without TTT or RRR then you could use logic like this.
where (whichc('TTT',v1,v2) and whichc('RRR',v1,v2))
or (whichc('GGG',v1,v2) and not (whichc('TTT',v1,v2) or whichc('RRR',v1,v2)))
I don't think you need anything particularly complicated here; the key concept which you'll see in user2877959's answer as well as mine is to concatenate the strings into one long string, so you only need a single call (or in my case, three simple calls). Easier in data step than in sql, but it will work either way.
I use CATX with a delimiter to make sure "RR" | "RTTT" does not match, also. Then we just use FIND to find the string that matches.
data have;
input (v1-v10) (:$3.);
datalines;
AAA BBB CCC DDD EEE FFF GGG HHH III JJJ
KKK LLL MMM NNN OOO PPP QQQ RRR SSS TTT
UUU VVV WWW XXX YYY ZZZ AAA BBB CCC DDD
GGG TTT RRR AAA BBB CCC DDD EEE FFF HHH
;;;;
run;
proc sql;
select * from have
WHERE (
find(catx('|',v1,v2,v3,v4,v5,v6,v7,v8,v9,v10),'RRR') and
find(catx('|',v1,v2,v3,v4,v5,v6,v7,v8,v9,v10),'TTT')
) ne
(find(catx('|',v1,v2,v3,v4,v5,v6,v7,v8,v9,v10),'GGG') > 0
)
;
quit;
Of course the view could process the WHERE clause if you wanted it to also.
I've modified the code to add the XOR combination; basically it either contains GGG or TTT+RRR but not GGG+TTT+RRR. This works by simply comparing the two boolean results (note I add >0 to the second to get it to evaluate to a true/false; the first will already evaluate to true/false thanks to and).
If you actually want GGG+RRR to be excluded, you'll have to add some additional criteria; you may simply be better off assigning the values of 'has RRR', 'has TTT', and 'has GGG' to three variables (either in a view, or in the PROC SQL SELECT query) and then evaluate those rather than having to do a bunch of find/whichn/etc.
You could try regular expressions. Something in the line of this:
where prxmatch('/(TTT.*RRR|RRR.*TTT)|GGG/',cats(v1,v2,v3,v4,v5,v6,v7,v8,v9,v10));
I'm looking for a more efficient way to run many columns updates on the same table like this:
UPDATE table
SET col = regexp_replace( col, 'foo', 'bar' )
WHERE col ~ 'foo';
Such that foo and bar will be a combination of 40 different regex replaces. I doubt even 25% of the dataset needs to be updated at all, but what I want to know is whether it is possible to cleanly achieve the following in SQL:
A single pass update
A single match of the regex, triggers a single replace
Not running all possible regexp_replaces if only one matches
Not updating all columns if only one needs the update
Not updating a row if no column has changed
I'm also curious: I know that in MySQL (bear with me)
UPDATE foo SET bar = 'baz'
has an implicit WHERE bar != 'baz' clause.
However, I know this doesn't exist in PostgreSQL. I think I could at least answer one of my questions if I knew how to skip a single row's update when the target columns weren't changed.
Something like
UPDATE TABLE table
SET col = *temp_var* = regexp_replace( col, 'foo', 'bar' )
WHERE col != *temp_var*
Do it in code. Open up a cursor, then: grab a row, run it through the 40 regular expressions, and if it changed, save it back. Repeat until the cursor doesn't give you any more rows.
Whether you do it that way or come up with the magical SQL expression, it's still going to be a row scan of the entire table, but the code will be much simpler.
Experimental Results
In response to criticism, I ran an experiment. I inserted 10,000 lines from a documentation file into a table with a serial primary key and a varchar column. Then I tested two ways to do the update. Method 1:
in a transaction:
opened up a cursor (select for update)
while reading 100 rows from the cursor returns any rows:
for each row:
for each regular expression:
do the gsub on the text column
update the row
This takes 1.16 seconds with a locally connected database.
Then the "big replace," a single mega-regex update:
update foo set t =
regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(t,
E'\bcommit\b', E'COMMIT'),
E'\b9acf10762b5f3d3b1b33ea07792a936a25e45010\b',
E'9ACF10762B5F3D3B1B33EA07792A936A25E45010'),
E'\bAuthor:\b', E'AUTHOR:'),
E'\bCarl\b', E'CARL'), E'\bWorth\b',
E'WORTH'), E'\b\b',
E''), E'\bDate:\b',
E'DATE:'), E'\bMon\b', E'MON'),
E'\bOct\b', E'OCT'), E'\b26\b',
E'26'), E'\b04:53:13\b', E'04:53:13'),
E'\b2009\b', E'2009'), E'\b-0700\b',
E'-0700'), E'\bUpdate\b', E'UPDATE'),
E'\bversion\b', E'VERSION'),
E'\bto\b', E'TO'), E'\b2.9.1\b',
E'2.9.1'), E'\bcommit\b', E'COMMIT'),
E'\b61c89e56f361fa860f18985137d6bf53f48c16ac\b',
E'61C89E56F361FA860F18985137D6BF53F48C16AC'),
E'\bAuthor:\b', E'AUTHOR:'),
E'\bCarl\b', E'CARL'), E'\bWorth\b',
E'WORTH'), E'\b\b',
E''), E'\bDate:\b',
E'DATE:'), E'\bMon\b', E'MON'),
E'\bOct\b', E'OCT'), E'\b26\b',
E'26'), E'\b04:51:58\b', E'04:51:58'),
E'\b2009\b', E'2009'), E'\b-0700\b',
E'-0700'), E'\bNEWS:\b', E'NEWS:'),
E'\bAdd\b', E'ADD'), E'\bnotes\b',
E'NOTES'), E'\bfor\b', E'FOR'),
E'\bthe\b', E'THE'), E'\b2.9.1\b',
E'2.9.1'), E'\brelease.\b',
E'RELEASE.'), E'\bThanks\b',
E'THANKS'), E'\bto\b', E'TO'),
E'\beveryone\b', E'EVERYONE'),
E'\bfor\b', E'FOR')
The mega-regex update takes 0.94 seconds to update.
At 0.94 seconds compared to 1.16, it's true that the mega-regex update is faster, running in 81% of the time of doing it in code. It is not, however, a lot faster. And ye gods, look at that update statement. Do you want to write that, or try to figure out what went wrong when Postgres complains that you dropped a parenthesis somewhere?
Code
The code used was:
def stupid_regex_replace
  sql = Select.new
  sql.select('id')
  sql.select('t')
  sql.for_update
  sql.from(TABLE_NAME)
  Cursor.new('foo', sql, {}, @db) do |cursor|
    until (rows = cursor.fetch(100)).empty?
      for row in rows
        for regex, replacement in regexes
          row['t'] = row['t'].gsub(regex, replacement)
        end
        sql = Update.new(TABLE_NAME, @db)
        sql.set('t', row['t'])
        sql.where(['id = %s', row['id']])
        sql.exec
      end
    end
  end
end
I generated the regular expressions dynamically by taking words from the file; for each word "foo", its regular expression was "\bfoo\b" and its replacement string was "FOO" (the word uppercased). I used words from the file to make sure that replacements did happen. I made the test program print the regexes so you can see them. Each pair is a regex and the corresponding replacement string:
[[/\bcommit\b/, "COMMIT"],
[/\b9acf10762b5f3d3b1b33ea07792a936a25e45010\b/,
"9ACF10762B5F3D3B1B33EA07792A936A25E45010"],
[/\bAuthor:\b/, "AUTHOR:"],
[/\bCarl\b/, "CARL"],
[/\bWorth\b/, "WORTH"],
[/\b<cworth@cworth.org>\b/, "<CWORTH@CWORTH.ORG>"],
[/\bDate:\b/, "DATE:"],
[/\bMon\b/, "MON"],
[/\bOct\b/, "OCT"],
[/\b26\b/, "26"],
[/\b04:53:13\b/, "04:53:13"],
[/\b2009\b/, "2009"],
[/\b-0700\b/, "-0700"],
[/\bUpdate\b/, "UPDATE"],
[/\bversion\b/, "VERSION"],
[/\bto\b/, "TO"],
[/\b2.9.1\b/, "2.9.1"],
[/\bcommit\b/, "COMMIT"],
[/\b61c89e56f361fa860f18985137d6bf53f48c16ac\b/,
"61C89E56F361FA860F18985137D6BF53F48C16AC"],
[/\bAuthor:\b/, "AUTHOR:"],
[/\bCarl\b/, "CARL"],
[/\bWorth\b/, "WORTH"],
[/\b<cworth@cworth.org>\b/, "<CWORTH@CWORTH.ORG>"],
[/\bDate:\b/, "DATE:"],
[/\bMon\b/, "MON"],
[/\bOct\b/, "OCT"],
[/\b26\b/, "26"],
[/\b04:51:58\b/, "04:51:58"],
[/\b2009\b/, "2009"],
[/\b-0700\b/, "-0700"],
[/\bNEWS:\b/, "NEWS:"],
[/\bAdd\b/, "ADD"],
[/\bnotes\b/, "NOTES"],
[/\bfor\b/, "FOR"],
[/\bthe\b/, "THE"],
[/\b2.9.1\b/, "2.9.1"],
[/\brelease.\b/, "RELEASE."],
[/\bThanks\b/, "THANKS"],
[/\bto\b/, "TO"],
[/\beveryone\b/, "EVERYONE"],
[/\bfor\b/, "FOR"]]
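The per-row substitution in the cursor approach boils down to chained gsub calls; a minimal standalone sketch (the two regex/replacement pairs and the sample text below are made up, not from the generated list):

```ruby
# Minimal standalone sketch of the per-row substitution loop.
regexes = [[/\bcommit\b/, "COMMIT"], [/\bCarl\b/, "CARL"]]
text = "commit by Carl"
regexes.each do |regex, replacement|
  # each pair rewrites the text in turn, exactly as in the cursor loop
  text = text.gsub(regex, replacement)
end
puts text  # => "COMMIT by CARL"
```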
If this were a hand-generated list of regexes, and not automatically generated, my question is still appropriate: Which would you rather have to create or maintain?
For the skip update, look at the suppress_redundant_updates_trigger function - see http://www.postgresql.org/docs/8.4/static/functions-trigger.html.
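Wiring it up is a one-liner per table; a sketch with a hypothetical table name tbl:

```sql
-- BEFORE UPDATE trigger that silently discards updates which would
-- not change the row ("tbl" is a placeholder table name).
CREATE TRIGGER z_min_update
BEFORE UPDATE ON tbl
FOR EACH ROW
EXECUTE PROCEDURE suppress_redundant_updates_trigger();
```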
This is not necessarily a win - but it might well be in your case.
Or perhaps you can just add that implicit check as an explicit one?
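That explicit check can be written by repeating the replacement in the WHERE clause; a sketch with hypothetical table/column names:

```sql
-- Only rows whose value would actually change get updated; unchanged
-- rows are filtered out before the write ("tbl"/"col" are placeholders).
UPDATE tbl
SET    col = regexp_replace(col, 'foo', 'bar', 'g')
WHERE  col IS DISTINCT FROM regexp_replace(col, 'foo', 'bar', 'g');
```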