Qlik - Comma Separated list to Array for Dimension - qlikview

I have a field that is stored as a comma-separated string list of varying size. For example:
7,5,6,4,2,3
I need to build an expression that creates an array which will become the dimension on the x-axis of a bar chart.
I have used
SubField(string_list, ',')
However, it produces only 1 value.

You have to use SubField in a LOAD statement if you want to use it without the field_no parameter and generate a record for each value.
Otherwise you would need to specify the field_no explicitly.
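For illustration, a minimal load-script sketch of that approach (the table and field names here are hypothetical):
// Hypothetical inline source; the pipe delimiter keeps the comma-separated list intact
Source:
LOAD * INLINE [
id|string_list
1|7,5,6,4,2,3
] (delimiter is '|');

// SubField without the field_no parameter generates one record per value,
// which can then be used as the bar chart dimension
ListValues:
LOAD
    id,
    SubField(string_list, ',') AS list_value
RESIDENT Source;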
Best regards,
Tom

Qlik also supports JavaScript.
So maybe this will help:
function GetItemFromList(list, n, sep) {
    // if (sep === undefined) { sep = ';'; }
    sep = typeof sep !== 'undefined' ? sep : ',';
    return list.split(sep)[n - 1];
}
And use it in the loading script:
GetItemFromList('$(LongMonthNames)',12,';') //Dec

Related

How to expand this dynamic column with only 1 value

This Id column is of dynamic type, but it only holds one value (f3019...). I want to get rid of the array so it only has the field value.
When I try mv-expand Id, it doesn't do anything.
Also the current query is like:
Id = Target.Properties[0].value
When I try
Id = Target.Properties[0].value[0]
Id returns a blank
The dynamic types can hold arrays and dictionaries, but also scalar types.
The fact that Target.Properties[0].value does not behave like an array indicates that it is not an array, but a string.
The representation of it as an array in the GUI relates to the serving layer and not to the way it is actually stored.
Use tostring(parse_json(tostring(Target.Properties[0].value))[0]).
Every element within a dynamic field is also of dynamic type.
When running on a dynamic element, parse_json() returns the element as is.
If we want the element to get parsed, we first need to convert it to a string, using tostring().
parse_json(), which is used to parse the string, returns an array (which is a dynamic element).
The first (and only) element of the array is also of a dynamic type.
We use an additional tostring() to convert it to string.
Demo
print value = dynamic('["Hello"]')
| extend value[0] // Null because it's not really an array
| extend result = tostring(parse_json(tostring(value))[0])
value        value_0    result
["Hello"]    (null)     Hello
Misleading representation in Azure Monitor (screenshot omitted).

What is the difference between ', ` and |, and when should they be used?

I've seen strings written in these three ways:
lv_str = 'test'
lv_str2 = `test`
lv_str3 = |test|
The only thing I've noticed so far is that ' sometimes trims whitespace, while ` preserves it.
I just recently found | - I don't know much about it yet.
Can someone explain, or post a good link, about when each of these is best used, and whether there are even more ways?
|...| denotes ABAP string templates.
With string templates we can create a character string using texts, embedded expressions and control characters.
ABAP Docu
Examples
Use ' to define character-typed literals and non-integer numbers:
CONSTANTS some_chars TYPE char30 VALUE 'ABC'.
CONSTANTS some_number TYPE fltp VALUE '0.78'.
Use ` to define string-typed literals:
CONSTANTS some_constant TYPE string VALUE `ABC`.
Use | to assemble text:
DATA(message) = |Received HTTP code { status_code } with message { text }|.
This is an exhaustive list of the ways ABAP lets you define character sequences.
To answer the "when should they be used" part of the question:
` and | are useful if trailing spaces are needed (they are ignored with '; cf. this blog post for more information, but be careful: SCN currently renders the quotes badly, so the post is confusing):
DATA(arrival) = `Hello ` && `world`.
DATA(departure) = |Good | && |bye|.
Use string templates (|) rather than the combination of ` and && for easier reading (this remains very subjective; I tend to prefer |, and with my keyboard | is also easier to type):
DATA(arrival) = `Dear ` && mother_name && `, thank you!`.
DATA(departure) = |Bye { mother_name }, thank you!|.
Sometimes you don't have the choice: if a String data object is expected at a given position, then you must use ` or |. There are many other such cases.
In all other cases, I prefer to use ' (probably because it is even easier to type on my keyboard than |).
Although the other answers are helpful, they do not mention the most important difference between ' and `.
A character chain defined with single quotes gets type C with exactly the length of the chain, including any whitespace at the beginning and end of the character sequence.
So 'TEST' gets exactly the type C LENGTH 4,
whereas a construct like `TEST` will always evaluate to type string.
This is very important, for example, in a case like this:
REPORT zutest3.
DATA i TYPE i VALUE 2.
DATA(l_test1) = COND #( WHEN i = 1 THEN 'ACT3' ELSE 'ACTA4').
DATA(l_test2) = COND #( WHEN i = 1 THEN `ACT3` ELSE `ACTA4`).
WRITE l_test1.
WRITE l_test2.

DataWeave and Case Sensitivity

Can I turn off case sensitivity in DataWeave?
Two different requests are returning responses where the first contains a node called CDATA while the other contains a node called CData. In DataWeave is there a way to treat these as equal or do I need to have separate code statements such as payload.Data.CDATA and payload.Data.CData? If things were case insensitive I could have a single statement such as payload.data.cdata.
Thanks in advance,
Terry
It appears that I need two different statements.
payload.Data.*CDATA map $.#SeqId when payload.Data? and payload.Data.CDATA? and payload.Data.CDATA.#SeqId?
payload.Data.*CData map $.#SeqId when payload.Data? and payload.Data.CData? and payload.Data.CData.#SeqId?
No, but you can create a function like the following to select a key ignoring case.
It filters an object by a given key (mapObject comparing keys using lower) and then gets the values from the resulting object (with pluck).
%function selectIgnoreCase(obj, keyName)
    obj mapObject ((v, k) -> k match {
        x when (lower x) == keyName -> {(k): v},
        default -> {}
    }) pluck $
And you'd use it like this:
selectIgnoreCase(payload.Data, "cdata")
Note: with Mule 4 (and DataWeave 2) the syntax for this would be a little bit better.
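For reference, a rough, untested sketch of the same idea in DataWeave 2.0 (Mule 4); the function name and payload structure are just assumptions carried over from above:
%dw 2.0
output application/json

fun selectIgnoreCase(obj: Object, keyName: String) =
    // Keep only the entries whose key matches regardless of case, then take the values
    (obj filterObject ((value, key) -> lower(key as String) == lower(keyName))) pluck $
---
selectIgnoreCase(payload.Data, "cdata")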

Pig Nesting STRSPLIT

I have a string in field 'product' in the following form:
";TT_RAV;44;22;"
and I want to first split on the ';' and then split on the '_' so that what is returned is
"RAV"
I know that I can do something like this:
parse_1 = foreach {
splitup = STRSPLIT(product,';',3);
generate splitup.$1 as depiction;
};
This will return the string 'TT_RAV', and then I can do another split and project out the 'RAV'; however, this seems like it will pass the data through multiple map jobs. Is it possible to parse out the desired field in one pass?
This example does NOT work, as the inner STRSPLIT returns tuples, but it shows the logic:
parse_1 = foreach {
splitup = STRSPLIT(STRSPLIT(product,';',3),'_',1);
generate splitup.$1 as depiction;
};
Is it possible to do this in pure piglatin without multiple map phases?
Don't use STRSPLIT. You are looking for REGEX_EXTRACT:
REGEX_EXTRACT(product, '_([^;]*);', 1) AS depiction
If it's important to be able to precisely pick out the second semicolon-delimited field and then the second underscore-delimited subfield, you can make your regex more complicated:
REGEX_EXTRACT(product, '^[^;]*;[^_;]*_([^_;]*)', 1) AS depiction
Here's a breakdown of how that regex works:
^ // Start at the beginning
[^;]* // Match as many non-semicolons as possible, if any (first field)
; // Match the semicolon; now we'll start the second field
[^_;]* // Match any characters in the first subfield
_ // Match the underscore; now we'll start the second subfield (what we want)
( // Start capturing!
[^_;]* // Match any characters in the second subfield
) // End capturing
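For illustration, a sketch of how that single call might sit in the script (the load path, schema, and relation names are hypothetical):
a = LOAD 'products' AS (product:chararray);
-- One REGEX_EXTRACT pulls 'RAV' out of ';TT_RAV;44;22;' without a second pass
parse_1 = FOREACH a GENERATE REGEX_EXTRACT(product, '^[^;]*;[^_;]*_([^_;]*)', 1) AS depiction;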
The only time there will be multiple maps is if you have an operator that triggers a reduce (JOIN, GROUP, etc...). If you run an explain on the script you can see if there is more than one reduce phase.
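For example, running EXPLAIN on the final alias (parse_1 above) prints the logical, physical, and MapReduce plans, which shows how many map and reduce phases the script needs:
EXPLAIN parse_1;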

apache pig group by output -- remove "(" and "{"

I do the following:
a = load '/hive/warehouse/' USING PigStorage('^') as (a1,b1,c1);
b = group a by (a1) ;
c = foreach b generate group, a.$2;
dump c;
Output shows all the groups:
abc {(1),(44),(66)}
cde {(1),(44),(66)}
How can I remove the "{" and "(" characters so that the final HDFS file can be read as a comma-delimited file?
You can't do this directly in Pig. The special syntax is required because you are storing a bag, and in order for Pig to be able to read this bag later, it needs to be stored with braces (for the bag) and parentheses (for the tuples contained in the bag).
You have a couple of options. You can read the file back into Pig, but instead of reading it as a bag, read it as a chararray. Then you can perform regex substitution to get rid of the punctuation (untested):
a = LOAD 'output' AS (group:chararray, list:chararray);
b = FOREACH a GENERATE group, REPLACE(list, '[{()}]', '');
Another option is to write a UDF which will turn a bag into a tuple. Note that this is not a well-defined operation: bags have no particular order, so from one run to the next, your tuple is not guaranteed to be in the same order. But for your purposes it sounds like that may not matter. The UDF could look like (very rough draft, untested):
import java.io.IOException;
import java.util.Iterator;

import org.apache.pig.EvalFunc;
import org.apache.pig.data.DataBag;
import org.apache.pig.data.Tuple;
import org.apache.pig.data.TupleFactory;

public class BAG_TO_TUPLE extends EvalFunc<Tuple> {
    public Tuple exec(Tuple input) throws IOException {
        DataBag bag = (DataBag) input.get(0);
        Iterator<Tuple> iterator = bag.iterator();
        Tuple out = TupleFactory.getInstance().newTuple();
        while (iterator.hasNext()) {
            // Assumes each tuple in the bag holds exactly one field
            out.append(iterator.next().get(0));
        }
        return out;
    }
}
The above UDF is terrible -- it assumes that you have exactly one element in every tuple of the bag (that you care about) and does no checking whatsoever that the input is valid, etc. But it should get you towards what you want.
The best solution, though, is to find a way to handle the extra punctuation outside of Pig if Pig is not part of your downstream processing.
This functionality is now provided in Pig as a built-in func (I'm using 0.11).
http://pig.apache.org/docs/r0.11.0/api/org/apache/pig/builtin/BagToString.html
c = foreach b generate group, a.$2 as stuff;
d = foreach c generate group, BagToString(stuff, ',');
I don't need a comma-delimited file for my use case, but I assume you can use a store func to get the final comma (between group and the now-comma-delimited-list of bag things).
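For completeness, an untested sketch of that last step; the output path is arbitrary:
-- 'abc' and its list end up on one line as: abc,1,44,66
STORE d INTO 'output_csv' USING PigStorage(',');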
Try the FLATTEN operator:
c = foreach b generate group, FLATTEN(a.$2);