SAP HANA placeholders: pass * parameter with arrow notation - SQL

Trying to pass a star (*) in a SQL HANA placeholder with arrow notation.
The following works OK:
Select * FROM "table_1"
( PLACEHOLDER."$$IP_ShipmentStartDate$$" => '2020-01-01',
PLACEHOLDER."$$IP_ShipmentEndDate$$" => '2030-01-01' )
In the following, when trying to pass a *, I get a syntax error:
Select * FROM "table1"
( PLACEHOLDER."$$IP_ShipmentStartDate$$" => '2020-01-01',
PLACEHOLDER.'$$IP_ItemTypecd$$' => '''*''',
PLACEHOLDER."$$IP_ShipmentEndDate$$" => '2030-01-01' )
The reason I am using the arrow notation is that it's the only way I know of that allows passing parameters as in the example below (as in the linked post):
do begin
declare lv_param nvarchar(100);
select max('some_date')
into lv_param
from dummy /*your_table*/;
select * from "_SYS_BIC"."path.to.your.view/CV_TEST" (
PLACEHOLDER."$$P_DUMMY$$" => :lv_param
);
end;

There's a typo in your code. You need to use double quotes around the parameter name, but you have single quotes. It should be: PLACEHOLDER."$$IP_ItemTypecd$$".
When you pass something to a Calculation View's parameter, you already have a string; it will be treated as a string and quoted where needed, so there is no need to add more quotes. But if you really need to pass some quotes inside the placeholder's value, you also need to escape them with a backslash in addition to doubling them (this was found by doing a data preview on the calculation view and entering '*' as the value of the input parameter; the valid SQL statement then shows up in the preview log):
do
begin
select *
from "_SYS_BIC"."ztest/CV_TEST_PERF"(
PLACEHOLDER."$$P_DUMMY$$" => '''*'''
);
end;
/*
SAP DBTech JDBC: [339]: invalid number: : line 3 col 3 (at pos 13): invalid number:
not a valid number string '' at function __typecast__()
*/
/* And in the trace there's no more information, but the interesting part
is the preparation step, not the execution:
w SQLScriptExecuto se_eapi_proxy.cc(00145) : Error <exception 71000339:
not a valid number string '' at function __typecast__()
> in preparation of internal statement:
*/
do
begin
select *
from "_SYS_BIC"."ztest/CV_TEST_PERF"(
PLACEHOLDER."$$P_DUMMY$$" => '\'*\''
);
end;
/*
SAP DBTech JDBC: [257]: sql syntax error: incorrect syntax near "\": line 5 col 38 (at pos 121)
*/
But this is ok:
do
begin
select *
from "_SYS_BIC"."ztest/CV_TEST_PERF"(
PLACEHOLDER."$$P_DUMMY$$" => '\''*\'''
);
end;
LOG_ID | DATUM | INPUT_PARAM | CUR_DATE
--------------------------+----------+-------------+---------
8IPYSJ23JLVZATTQYYBUYMZ9V | 20201224 | '*' | 20201224
3APKAAC9OGGM2T78TO3WUUBYR | 20201224 | '*' | 20201224
F0QVK7BVUU5IQJRI2Q9QLY0WJ | 20201224 | '*' | 20201224
CW8ISV4YIAS8CEIY8SNMYMSYB | 20201224 | '*' | 20201224
What about the star itself:
As @LarsBr already said, in SQL you need to use LIKE '%pattern%' to search for strings containing a pattern in the middle; % is the equivalent of ABAP's * (though as far as I know, * is the more common wildcard outside the SQL world). So there's no out-of-the-box conversion of FIELD = '*' to FIELD LIKE '%' or anything similar.
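For illustration, the plain SQL counterpart of an ABAP-style wildcard search could look like this (table and column names are made up for the example):
-- ABAP-style intent: ITEM_TYPE_CD = 'AB*'
SELECT * FROM "some_table" WHERE "ITEM_TYPE_CD" LIKE 'AB%';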
But there's no LIKE predicate in the Column Engine (neither in a filter nor in a calculated column).
If you really need LIKE functionality in a filter or calculated column, you can:
Switch execution engine to SQL
Or use the match(arg, pattern) function of the Column Engine, which has now disappeared from the palette and is hidden quite well in the documentation (there, at the very end of the page, after digging into the description field of the last row in the table, you'll find the actual syntax for it. Damn!).
But here you'll meet another surprise: since the Column Engine has different operators than SQL (it is more internal and closer to the DB core), it uses the star (*) as the wildcard character. So for match(string, pattern) you need to use a star again: match('pat string tern', 'pat*tern').
After all the above: there are cases where you really do want to search for data with wildcards and pass them as a parameter. But then you need to use match and pass the parameter as plain text, without any tricks around the star (*) or anything else (if you want to use officially supported features and not exploit some internals).
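For reference, the filter expression on the projection node could look roughly like this (only a sketch; the column name and the exact input-parameter reference syntax are assumptions based on the output below, so check them in your modeling tool):
match("LOG_ID", '$$P_DUMMY$$')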
After adding this filter to the RSPCLOGCHAIN projection node of my CV from the previous thread, it works this way:
do
begin
select *
from "_SYS_BIC"."ztest/CV_TEST_PERF"(
PLACEHOLDER."$$P_DUMMY$$" => 'CW*'
);
end;
LOG_ID | DATUM | INPUT_PARAM | CUR_DATE
--------------------------+----------+-------------+---------
CW8ISV4YIAS8CEIY8SNMYMSYB | 20201224 | CW* | 20201224
do
begin
select *
from "_SYS_BIC"."ztest/CV_TEST_PERF"(
PLACEHOLDER."$$P_DUMMY$$" => 'CW'
);
end;
/*
Fetched 0 row(s) in 0 ms 0 µs (server processing time: 0 ms 0 µs)
*/

The notation with triple quotation marks '''*''' is likely what yields the syntax error here.
Instead, use single quotation marks to provide the '*' string.
But that is just half of the challenge here.
In SQL, wildcard searches are done via LIKE, and the wildcard character is %, not *.
To mimic the ABAP behaviour when using calculation views, the input parameters must be used in filter expressions in the calculation view. And these filter expressions have to check whether the input parameter value is * or not. If it is *, the filter condition needs to be a LIKE; otherwise, an = (equals) condition.
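A filter expression along these lines could implement that check (just a sketch that treats a lone * as "match everything"; the column name ITEMTYPECD and the parameter reference syntax are assumptions and have to be adapted to the actual view):
'$$IP_ItemTypecd$$' = '*' or "ITEMTYPECD" = '$$IP_ItemTypecd$$'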
A final comment: the PLACEHOLDER-syntax really only works with calculation views and not with tables.

Related

SQL "where" clause failing with R JDBC HANA connection

I've had nothing but trouble connecting to my company's HANA DB through R, but I finally had a breakthrough; however, now my SQL statement is failing when subsetting data using a "where" clause.
The following returns a data frame of 10 observations across 9 variables
# Fetch all results
rs <- dbSendQuery(jdbcConnection, 'SELECT TOP 10
VISITTYPE,
ACCOUNT,
PLANNEDSTART,
PLANNEDEND,
EXECUTIONSTART,
EXECUTIONEND,
STATUS,
SOURCE,
ACCOUNT_NAME
FROM "_SYS_BIC"."cona-reporting.field-sales/Q_CA_R_SpringVisit"')
a <- dbFetch(rs)
However, when I throw a WHERE into it, I receive an error.
rs <- dbSendQuery(jdbcConnection, 'SELECT TOP 10
VISITTYPE,
ACCOUNT,
PLANNEDSTART,
PLANNEDEND,
EXECUTIONSTART,
EXECUTIONEND,
STATUS,
SOURCE,
ACCOUNT_NAME
FROM "_SYS_BIC"."cona-reporting.field-sales/Q_CA_R_SpringVisit" WHERE VISITTYPE = ZR')
Error in .verify.JDBC.result(r, "Unable to retrieve JDBC result set for ", :
Unable to retrieve JDBC result set for SELECT TOP 10
VISITTYPE,
ACCOUNT,
PLANNEDSTART,
PLANNEDEND,
EXECUTIONSTART,
EXECUTIONEND,
STATUS,
SOURCE,
ACCOUNT_NAME
FROM "_SYS_BIC"."cona-reporting.field-sales/Q_CA_R_SpringVisit" WHERE VISITTYPE = ZR (SAP DBTech JDBC: [260] (at 222): invalid column name: ZR: line 11 col 101 (at pos 222))
What does this mean? ZR is not a column, it is a value within the column. I tried placing ZR in quotes, to no effect.
My double and single quote syntax is based on this other question I've asked.
Issues connecting R to HANA db with many special characters
Never got it working with RODBC, so I tried RJDBC.
Likely it is the handling of quotes within an embedded quote-enclosed string, further complicated by the double-quote symbols used in SQL for identifiers. However, consider parameterization (an industry best practice whenever running SQL from an application layer such as R) to avoid the need for quote punctuation or concatenation. Like most JDBC APIs, RJDBC supports parameterization. Also note that dbGetQuery is equivalent to dbSendQuery + dbFetch:
sql <- 'SELECT TOP 10 VISITTYPE,
ACCOUNT,
PLANNEDSTART,
PLANNEDEND,
EXECUTIONSTART,
EXECUTIONEND,
STATUS,
SOURCE,
ACCOUNT_NAME
FROM "_SYS_BIC"."cona-reporting.field-sales/Q_CA_R_SpringVisit"
WHERE VISITTYPE = ?'
param <- 'ZR'
df <- dbGetQuery(jdbcConnection, sql, param)
To complete the previous answer (which is of course preferable, as it uses bind variables), here is the root cause of the problem:
A single quote inside a single-quoted string must of course be escaped.
Contrary to Oracle's escaping, which doubles the quote, R uses a backslash.
I.e. the proper usage is as follows:
> df <- dbGetQuery(jdbcConnection,
+ 'select * from "DUAL" where "DUMMY" = \'X\'')
> df
DUMMY
1 X
An alternative way, using a double-quoted string:
> df <- dbGetQuery(jdbcConnection,
+ "select * from \"DUAL\" where \"DUMMY\" = 'X'")
> df
DUMMY
1 X

How to use ARRAY contains operator with ANY

I have a table where one column is an array:
CREATE TABLE inherited_tags (
id serial,
tags text[]
);
Sample values:
INSERT INTO inherited_tags (tags) VALUES
(ARRAY['A','B','C']), -- id: 1
(ARRAY['D','E']), -- id: 2
(ARRAY['A','B']), -- id: 3
(ARRAY['C','D']), -- id: 4
(ARRAY['D','F']), -- id: 5
(ARRAY['A']); -- id: 6
I want to find rows whose tags column contains some subset of words from an array. For example, for the input:
ARRAY[ARRAY['A','C'], ARRAY['F'], ARRAY['E']]::text[][]
I want to find all rows that contain ('A' and 'C') OR ('F') OR ('E'). So for the example above I should get rows with ids 1, 2, 5.
I was hoping that I could use syntax like this:
SELECT * FROM inherited_tags WHERE
tags @> ANY(ARRAY[ARRAY['A','C'], ARRAY['F'], ARRAY['E']]::text[][])
but I get error:
ERROR: operator does not exist: text[] @> text
LINE 1: SELECT * FROM inherited_tags where tags <@ ANY(ARRAY[ARRAY['...
Postgres 9.6.
A plpgsql solution is acceptable, but SQL is preferred.
DB-FIDDLE: https://www.db-fiddle.com/f/cKCr7Sfab6u8rqaCHhJvPk/0
The problem comes from the fact that the text[] and text[][] data types are internally the same data type. An array has a base type and dimensions, and the ANY operator will always extract the base type to compare, which will always be text and not text[]. It doesn't help that multidimensional arrays require that each subelement has the same length as every other. You can have ARRAY[ARRAY['A','C'],ARRAY['B','N']], but not ARRAY[ARRAY[2,3],ARRAY[1]].
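A quick way to see this (an illustrative query, not from the original post):
SELECT pg_typeof(ARRAY['A','B']::text[]) AS one_dim,                   -- text[]
       pg_typeof(ARRAY[ARRAY['A','B']]::text[][]) AS two_dim,          -- also reported as text[]
       pg_typeof((ARRAY[ARRAY['A','B']]::text[][])[1][1]) AS element;  -- text, the base type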
In short, there is no direct way to make that particular query work. I tried to create a function and an operator for this as well, and that doesn't work, either, for different reasons. See how that went:
CREATE OR REPLACE FUNCTION check_tag_matches(
IN leftside text[],
IN rightside text)
RETURNS BOOLEAN AS
$BODY$
DECLARE rightarr text[];
BEGIN
SELECT CAST(rightside as text[]) INTO rightarr;
RETURN leftside @> rightarr;
END;
$BODY$
LANGUAGE plpgsql STABLE;
CREATE OPERATOR public.>>(
PROCEDURE = check_tag_matches,
LEFTARG = text[],
RIGHTARG = text,
COMMUTATOR = >>);
Then when testing it:
test=# SELECT * FROM inherited_tags WHERE
tags >> ANY(ARRAY[ARRAY['A','M'], ARRAY['F','E'], ARRAY['E','R']]::text[][]);
ERROR: malformed array literal: "A"
DETAIL: Array value must start with "{" or dimension information.
CONTEXT: SQL statement "SELECT CAST(rightside as text[])"
PL/pgSQL function check_tag_matches(text[],text) line 4 at SQL statement
It seems that when you try using a multidimensional array like ARRAY[ARRAY['A','M'], ARRAY['F','E'], ARRAY['E','R']]::text[][] in ANY(), it iterates not over ARRAY['A','M'], then ARRAY['F','E'], then ARRAY['E','R'], but over 'A','M','F','E','E','R'. The same thing happens with unnest.
test=# SELECT unnest(ARRAY[ARRAY['A','M'], ARRAY['F','E'], ARRAY['E','R']]::text[][]);
unnest
--------
A
M
F
E
E
R
(6 rows)
Your remaining options are to define a function that reads array_length(rightside,1) and array_length(rightside,2) and uses nested loops to check it all, or to send multiple queries to get the inherited tags for each tag, or to restructure your data somehow. And you can't even access the ARRAY['A','M'] element using rightside[1] to iterate over it; you're forced to go to the deepest level.
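For completeness, the nested-loop option could look something like this (a rough sketch, not from the original answer; the function name is made up, it uses the standard @> containment operator, and it assumes the caller pads shorter groups with NULLs so the two-dimensional array literal is valid):
CREATE OR REPLACE FUNCTION tags_contain_any_group(leftside text[], rightside text[])
RETURNS boolean AS
$BODY$
DECLARE
    grp text[];
BEGIN
    -- walk the first dimension: each "row" of the 2-D parameter is one group
    FOR i IN 1 .. coalesce(array_length(rightside, 1), 0) LOOP
        grp := ARRAY[]::text[];
        -- collect the non-NULL elements of the group from the second dimension
        FOR j IN 1 .. coalesce(array_length(rightside, 2), 0) LOOP
            IF rightside[i][j] IS NOT NULL THEN
                grp := grp || rightside[i][j];
            END IF;
        END LOOP;
        -- a group matches when tags contains all of its elements
        IF array_length(grp, 1) IS NOT NULL AND leftside @> grp THEN
            RETURN true;
        END IF;
    END LOOP;
    RETURN false;
END;
$BODY$
LANGUAGE plpgsql STABLE;
-- usage (note the NULL padding to keep the 2-D literal valid):
SELECT * FROM inherited_tags
WHERE tags_contain_any_group(tags, ARRAY[ARRAY['A','C'], ARRAY['F',NULL], ARRAY['E',NULL]]::text[][]);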
I don't think you can do that with a single condition because of the "contains A and C" requirement.
SELECT *
FROM inherited_tags
WHERE tags @> ARRAY['A','C']
OR tags && array['F', 'E'];
tags @> ARRAY['A','C'] selects those rows where tags contains all elements from ARRAY['A','C'], and tags && array['F', 'E'] selects those rows that contain at least one of the tags from array['F', 'E'].
Updated DB Fiddle: https://www.db-fiddle.com/f/rXsjqEN3ry67uxJtEs3GM9/0
You can try:
SELECT * FROM inherited_tags WHERE
tags @> ARRAY['A','C']::varchar[]
OR
tags @> ARRAY['E']::varchar[]
OR
tags @> ARRAY['F']::varchar[]

Create table name in Hive using variable substitution

I'd like to create a table name in Hive using variable substitution.
E.g.
SET market = "AUS";
create table ${hiveconf:market_cd}_active as ... ;
But it fails. Any idea how it can be achieved?
You should use backticks (``) around the name for that, like:
SET market=AUS;
CREATE TABLE `${hiveconf:market}_active` AS SELECT 1;
DESCRIBE `${hiveconf:market}_active`;
Example run script.sql from beeline:
$ beeline -u jdbc:hive2://localhost:10000/ -n hadoop -f script.sql
Connecting to jdbc:hive2://localhost:10000/
...
0: jdbc:hive2://localhost:10000/> SET market=AUS;
No rows affected (0.057 seconds)
0: jdbc:hive2://localhost:10000/> CREATE TABLE `${hiveconf:market}_active` AS SELECT 1;
...
INFO : Dag name: CREATE TABLE `AUS_active` AS SELECT 1(Stage-1)
...
INFO : OK
No rows affected (12.402 seconds)
0: jdbc:hive2://localhost:10000/> DESCRIBE `${hiveconf:market}_active`;
...
INFO : Executing command(queryId=hive_20190801194250_1a57e6ec-25e7-474d-b31d-24026f171089): DESCRIBE `AUS_active`
...
INFO : OK
+-----------+------------+----------+
| col_name | data_type | comment |
+-----------+------------+----------+
| _c0 | int | |
+-----------+------------+----------+
1 row selected (0.132 seconds)
0: jdbc:hive2://localhost:10000/> Closing: 0: jdbc:hive2://localhost:10000/
Markovitz's criticisms are correct, but do not produce a correct solution. In summary, you can use variable substitution for things like string comparisons, but NOT for things like naming variables and tables. If you know much about language compilers and parsers, you get a sense of why this would be true. You could construct such behavior in a language like Java, but SQL is just too crude.
Running that code produces an error: "cannot recognize input near '$' '{' 'hiveconf' in table name". (I am running Hortonworks, Hive 1.2.1000.2.5.3.0-37.)
I spent a couple of hours Googling and experimenting with different combinations of punctuation and different tools, ranging from the command line to Ambari and DB Visualizer, and I never found any way to construct a table name or a field name with a variable value. I think you're stuck with using variables in places where you need a string literal, like comparisons, but you cannot use them in place of reserved words or existing data structures, if that makes sense. For example:
--works
drop table if exists user_rgksp0.foo;
-- Does NOT work:
set MY_FILE_NAME=user_rgksp0.foo;
--drop table if exists ${hiveconf:MY_FILE_NAME};
-- Works
set REPORT_YEAR=2018;
select count(1) as stationary_event_count, day, zip_code, route_id from aaetl_dms_pub.dms_stationary_events_pub
where part_year = '${hiveconf:REPORT_YEAR}'
-- Does NOT Work:
set MY_VAR_NAME='zip_code'
select count(1) as stationary_event_count, day, '${hiveconf:MY_VAR_NAME}', route_id from aaetl_dms_pub.dms_stationary_events_pub
where part_year = 2018
The quotes around the value should be removed.
You're using the wrong variable name.
SET market=AUS; create table ${hiveconf:market}_active as select 1;

Rails 4 querying against postgresql column with array data type error

I am trying to query a table with a column with the postgresql array data type in Rails 4.
Here is the table schema:
create_table "db_of_exercises", force: true do |t|
t.text "preparation"
t.text "execution"
t.string "category"
t.datetime "created_at"
t.datetime "updated_at"
t.string "name"
t.string "body_part", default: [], array: true
t.hstore "muscle_groups"
t.string "equipment_type", default: [], array: true
end
The following query works:
SELECT * FROM db_of_exercises WHERE ('Arms') = ANY (body_part);
However, this query does not:
SELECT * FROM db_of_exercises WHERE ('Arms', 'Chest') = ANY (body_part);
It throws this error:
ERROR: operator does not exist: record = character varying
This does not work for me either:
SELECT * FROM "db_of_exercises" WHERE "body_part" IN ('Arms', 'Chest');
That throws this error:
ERROR: array value must start with "{" or dimension information
So, how do I query a column with an array data type in ActiveRecord??
What I have right now is:
@exercises = DbOfExercise.where(body_part: params[:body_parts])
I want to be able to query records that have more than one body_part associated with them, which was the whole point of using an array data type, so if someone could enlighten me on how to do this that would be awesome. I don't see it anywhere in the docs.
Final solution for posterity:
Using the overlap operator (&&):
SELECT * FROM db_of_exercises WHERE ARRAY['Arms', 'Chest'] && body_part;
I was getting this error:
ERROR: operator does not exist: text[] && character varying[]
so I typecasted ARRAY['Arms', 'Chest'] to varchar:
SELECT * FROM db_of_exercises WHERE ARRAY['Arms', 'Chest']::varchar[] && body_part;
and that worked.
I don't think that this is related to Rails.
What if you do the following?
SELECT * FROM db_of_exercises WHERE 'Arms' = ANY (body_part) OR 'Chest' = ANY (body_part)
I know that Rails 4 supports the PostgreSQL ARRAY datatype, but I'm not sure whether ActiveRecord creates new methods for querying it. Maybe you can use array overlap (I mean the && operator) and then do something like:
WHERE ARRAY['Arms', 'Chest'] && body_part
or maybe take a look at this gem: https://github.com/dockyard/postgres_ext/blob/master/docs/querying.md
And then do a query like:
DBOfExercise.where.overlap(:body_part => params[:body_parts])
@Aguardientico is absolutely right that what you want is the array overlaps operator &&. I'm following up with some more explanation, but would prefer you to accept that answer, not this one.
Anonymous rows (records)
The construct ('item1', 'item2', ...) is a row-constructor unless it appears in an IN (...) list. It creates an anonymous row, which PostgreSQL calls a "record". The error:
ERROR: operator does not exist: record = character varying
is because ('Arms', 'Chest') is being interpreted as if it were ROW('Arms', 'Chest'), which produces a single record value:
craig=> SELECT ('Arms', 'Chest'), ROW('Arms', 'Chest'), pg_typeof(('Arms', 'Chest'));
row | row | pg_typeof
--------------+--------------+-----------
(Arms,Chest) | (Arms,Chest) | record
(1 row)
and PostgreSQL has no idea how it's supposed to compare that to a string.
I don't really like this behaviour; I'd prefer it if PostgreSQL required you to explicitly use a ROW() constructor when you want an anonymous row. I expect that the behaviour shown here exists to support SET (col1,col2,col3) = (val1,val2,val3) and other similar operations where a ROW(...) constructor wouldn't make as much sense.
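For instance, that multi-column form is what you see in statements like this (hypothetical table and column names):
UPDATE some_table SET (col1, col2) = ('val1', 'val2') WHERE id = 1;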
But the same thing with a single item works?
The reason the single ('Arms') case works is that, unless there's a comma, it's just a single parenthesised value, where the parentheses are redundant and may be ignored:
craig=> SELECT ('Arms'), ROW('Arms'), pg_typeof(('Arms')), pg_typeof(ROW('Arms'));
?column? | row | pg_typeof | pg_typeof
----------+--------+-----------+-----------
Arms | (Arms) | unknown | record
(1 row)
Don't be alarmed by the type unknown. It just means that it's a string literal that hasn't yet had a type applied:
craig=> SELECT pg_typeof('blah');
pg_typeof
-----------
unknown
(1 row)
Compare array to scalar
This:
SELECT * FROM "db_of_exercises" WHERE "body_part" IN ('Arms', 'Chest');
fails with:
ERROR: array value must start with "{" or dimension information
because of implicit casting. The type of the body_part column is text[] (or varchar[]; same thing in PostgreSQL). You're comparing it for equality with the values in the IN clause, which are unknown-typed literals. The only valid equality comparison for an array is with another array of the same type, so PostgreSQL figures that the values in the IN clause must also be text[] arrays and tries to parse them as such.
Since they aren't written as array literals, like {"FirstValue","SecondValue"}, this parsing fails. Observe:
craig=> SELECT 'Arms'::text[];
ERROR: array value must start with "{" or dimension information
LINE 1: SELECT 'Arms'::text[];
^
See?
It's easier to understand this once you see that IN is actually just a shorthand for = ANY. It's an equality comparison to each element in the IN list. That isn't what you want if you really want to find out if two arrays overlap.
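To make that concrete (a sketch with a hypothetical scalar column name):
-- these two predicates behave the same way for a scalar column
WHERE some_col IN ('Arms', 'Chest')
WHERE some_col = ANY (ARRAY['Arms', 'Chest'])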
So that's why you want to use the array overlaps operator &&.

How can I use the COUNT value obtained from a call to mksqlite()?

I'm using mksqlite to create and access an SQL database from MATLAB, and I want to get the number of rows in a table. I've tried this:
num = mksqlite('SELECT COUNT(*) FROM myTable');
But the returned value isn't very helpful. If I put a breakpoint in my script and examine the variable, I find that it's a struct with a single field, called 'COUNT(_)', which seems to be an invalid name for a field, so I can't access it:
K>> class(num)
ans =
struct
K>> num
num =
COUNT(_): 0
K>> num.COUNT(_)
??? num.COUNT(_)
|
Error: The input character is not valid in MATLAB statements or expressions.
K>> num.COUNT()
??? Reference to non-existent field 'COUNT'.
K>> num.COUNT
??? Reference to non-existent field 'COUNT'.
Even the MATLAB IDE can't access it. If I try to double click the field in the variable editor, this gets spat out:
??? openvar('num.COUNT(_)', num.COUNT(_));
|
Error: The input character is not valid in MATLAB statements or expressions.
So how can I access this field?
You are correct that the problem is that mksqlite somehow manages to create an invalid field name that can't be read. The simplest solution is to add an AS clause to your SQL so that the field has a sensible name:
>> num = mksqlite('SELECT COUNT(*) AS cnt FROM myTable')
num =
cnt: 0
Then to remove the extra layer of indirection you can do:
>> num = num.cnt;
>> num
num =
0