QlikView: Showing 0 bars but maintaining order

I have a bar chart with 6 categories, e.g. Prohibited, Restricted, High, Very High, Moderate, Low. In my load script I created an order column for sorting the bars in the chart, which goes something like:
if(COL = 'prohibited', 1,
 if(COL = 'restricted', 2,
  if(COL = 'very high', 3,
   if(COL = 'high', 4,
    if(COL = 'moderate', 5,
     if(COL = 'low', 6, 1)))))) AS COL_SORT
I also have the "Show All Values" option checked, which shows the categories with 0 counts. What I find is that the sorting only partially works: if there is volume in every category the order is correct, but if there are 0 bars there is no guarantee the sort holds.
My question is: Is there a way to guarantee the order of the bars no matter whether the bar count is 0 or more? The company I work for won't let me upload sample data, so I understand if that annoys people trying to help.
Kind Regards
Edit: I'd like to keep the order as in the if/else statement.

In the chart's Properties -> Sort tab, use "Expression" as the sort order with this expression:
max({1} COL_SORT)
The {1} set identifier evaluates against the full data set regardless of selections, so the expression still returns a value for categories with a 0 count.

The dual() function can do what you want. dual() assigns a numeric value to each text value of a dimension; QlikView uses it internally to sort things like month and day names. One important note: after assigning the order in the script, you need to set the sort order in your chart to Numeric.
dual(COL,
  if(COL = 'prohibited', 1,
   if(COL = 'restricted', 2,
    if(COL = 'very high', 3,
     if(COL = 'high', 4,
      if(COL = 'moderate', 5,
       if(COL = 'low', 6, 1))))))) AS COL


Rearranging numpy arrays

I was not able to find a duplicate of my question, unfortunately, although I am sure this is a problem that has been solved before.
I have a numpy array with a certain set of indices, eg.
ind1 = np.array([1, 3, 5, 7])
With these indices, I can filter some values from another array. Let's call this other array rows. As an example, I can retrieve
rows[ind1] = [1, 10, 20, 15]
The order of rows[ind1] must not be changed in the following.
I have another index array, ind2
ind2 = np.array([4, 5, 6, 7])
I also have an array cols, where I can filter values from using ind2. I know that cols[ind2] results in an array which has the size of rows[ind1] and the entries are the same, but the order is different. An example:
cols[ind2] = [15, 20, 10, 1]
I would like to rearrange the order of cols[ind2], so that it corresponds to rows[ind1]. I am interested in the corresponding order of ind2.
In the example, the result should be
cols[ind2] = [1, 10, 20, 15]
ind2 = [7, 6, 5, 4]
Using numpy, I did not find a way to do this. Any ideas would be helpful. Thanks in advance.
There may be a better way, but you can do this using argsorts.
Let's call your "reordered ind2" ind3.
If you are sure that rows[ind1] and cols[ind2] will have the same length and all of the same elements, then the sorted versions of both will be the same, i.e. np.sort(rows[ind1]) == np.sort(cols[ind2]).
If this is the case, and you don't run into any problems with repeated elements (unsure of your exact use case), then what you can do is find the indices to put cols[ind2] in order, and then from there, find the indices to put np.sort(cols[ind2]) into the order of rows[ind1].
So, if
p1 = np.argsort(rows[ind1])
p2 = np.argsort(cols[ind2])
p3 = np.argsort(p1)
then
ind3 = ind2[p2][p3]
The reason this works is that an argsort of an argsort gives you the indices needed to reverse the first sort. p2 sorts cols[ind2] (that's the definition of argsort), and p3 un-sorts that result back into the order of rows[ind1].
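A runnable sketch of the steps above, using the index arrays from the question. The concrete rows and cols arrays are my own illustration, chosen so that rows[ind1] and cols[ind2] match the example values:

```python
import numpy as np

# Arrays chosen so rows[ind1] and cols[ind2] match the question's example
rows = np.array([0, 1, 0, 10, 0, 20, 0, 15])
cols = np.array([0, 0, 0, 0, 15, 20, 10, 1])
ind1 = np.array([1, 3, 5, 7])   # rows[ind1] -> [ 1, 10, 20, 15]
ind2 = np.array([4, 5, 6, 7])   # cols[ind2] -> [15, 20, 10,  1]

p1 = np.argsort(rows[ind1])   # sorts rows[ind1]
p2 = np.argsort(cols[ind2])   # sorts cols[ind2]
p3 = np.argsort(p1)           # undoes the sort of rows[ind1]

ind3 = ind2[p2][p3]           # reordered ind2
print(ind3)                   # [7 6 5 4]
print(cols[ind3])             # [ 1 10 20 15], same as rows[ind1]
```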

Removing selected features from dataset

I am following this program: https://scikit-learn.org/dev/auto_examples/inspection/plot_permutation_importance_multicollinear.html
since I have a problem with highly correlated features in my model (different from that one shown in the example). In this step
selected_features = [v[0] for v in cluster_id_to_feature_ids.values()]
I can get information on the features that I will need to remove from my classifier. They are given as numbers ([0, 3, 5, 6, 8, 9, 10, 17]). How can I get names of these features?
OK, I think there are two different elements to this problem.
First, you need to get a list of the column names. In the example code you linked, it looks like the list of feature names is stored like this:
data.feature_names
Once you have the feature names, you'd need a way to loop through them and grab only the ones you want. Something like this should work:
columns = ['a', 'b', 'c', 'd']
keep_index = [0, 3]
new_columns = [columns[i] for i in keep_index]
new_columns
# ['a', 'b']
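As an alternative to the list comprehension, a NumPy array of names can be fancy-indexed directly with the index list from the question. The single-letter names below are placeholders, not the dataset's real feature names:

```python
import numpy as np

# Placeholder names standing in for data.feature_names
feature_names = np.array(['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h',
                          'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p',
                          'q', 'r'])
selected_features = [0, 3, 5, 6, 8, 9, 10, 17]

# Fancy indexing picks all the selected names in one step
selected_names = feature_names[selected_features]
print(list(selected_names))   # ['a', 'd', 'f', 'g', 'i', 'j', 'k', 'r']
```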

How to plot outliers with regard to unique ids

I have item_code column in my data and another column, sales, which represents sales quantity for the particular item.
The data can have a particular item id many times. There are other columns that tell these entries apart.
I want to plot only the outlier sales for each item (because data has thousands of different item ids, plotting every entry can be difficult).
Since I'm very new to this, what is the right way and tool to do this?
You can use pandas. You need to choose a method for detecting outliers; here is an example.
If you want to get outliers over all sales (not per group), you can use apply with a function (here a lambda) to get the outlier rows:
import numpy as np
import pandas as pd
%matplotlib inline

df = pd.DataFrame({'item_id': [1, 1, 2, 1, 2, 1, 2],
                   'sales': [0, 2, 30, 3, 30, 30, 55]})
df[df.apply(lambda x: np.abs(x.sales - df.sales.mean()) / df.sales.std() > 1, axis=1)
   ].set_index('item_id').plot(style='.', color='red')
In this example we generate a data sample and find the rows whose z-score, np.abs(sales - mean) / std, exceeds 1 (you can try another method), then plot them with the sales quantity on the y-axis and the item id on the x-axis. This flags the points 0 and 55. If you want to search for outliers within each group, group the data first:
df.groupby('item_id').apply(lambda data: data.loc[
    data.apply(lambda x: np.abs(x.sales - data.sales.mean()) / data.sales.std() > 1, axis=1)
]).set_index('item_id').plot(style='.', color='red')
In this example we get the points 30 and 55, because 0 is not an outlier within the item_id = 1 group, but 30 is. Is this what you want to do? I hope it helps you get started.
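A quick non-plotting check of the grouped version, with the same toy data and the same |z| > 1 rule, just to show which rows actually get flagged:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'item_id': [1, 1, 2, 1, 2, 1, 2],
                   'sales': [0, 2, 30, 3, 30, 30, 55]})

def group_outliers(g):
    # Rows whose |z-score| within the group exceeds 1
    z = (g.sales - g.sales.mean()).abs() / g.sales.std()
    return g[z > 1]

outliers = df.groupby('item_id', group_keys=False).apply(group_outliers)
print(sorted(outliers.sales.tolist()))   # [30, 55]
```

Note that 0 is flagged when the whole column is tested but not within its group, while 30 is an outlier only inside the item_id = 1 group, matching the explanation above.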

Get indices for values of one array in another array

I have two 1D-arrays containing the same set of values, but in a different (random) order. I want to find the list of indices, which reorders one array according to the other one. For example, my 2 arrays are:
import numpy
ref = numpy.array([5, 3, 1, 2, 3, 4])
new = numpy.array([3, 2, 4, 5, 3, 1])
and I want the list order for which new[order] == ref.
My current idea is:
def find(val):
    return numpy.argmin(numpy.absolute(ref - val))

order = sorted(range(new.size), key=lambda x: find(new[x]))
However, this only works as long as no values are repeated. In my example 3 appears twice, and I get new[order] = [5 3 3 1 2 4]. The second 3 is placed directly after the first one, because my function find() does not track which 3 I am currently looking for.
So I could add something to deal with this, but I have a feeling there might be a better solution out there. Maybe in some library (NumPy or SciPy)?
Edit about the duplicate: This linked solution assumes that the arrays are ordered, or for the "unordered" solution, returns duplicate indices. I need each index to appear only once in order. Which one comes first however, is not important (neither possible based on the data provided).
What I get with sort_idx = A.argsort(); order = sort_idx[np.searchsorted(A,B,sorter = sort_idx)] is: [3, 0, 5, 1, 0, 2]. But what I am looking for is [3, 0, 5, 1, 4, 2].
Given ref, new which are shuffled versions of each other, we can get the unique indices that map ref to new using the sorted version of both arrays and the invertibility of np.argsort.
Start with:
i = np.argsort(ref)
j = np.argsort(new)
Now ref[i] and new[j] both give the sorted version of the arrays, which is the same for both. You can invert the first sort by doing:
k = np.argsort(i)
Now ref is just new[j][k], or new[j[k]]. Since all the operations are shuffles using unique indices, the final index j[k] is unique as well. j[k] can be computed in one step with
order = np.argsort(new)[np.argsort(np.argsort(ref))]
From your original example:
>>> ref = np.array([5, 3, 1, 2, 3, 4])
>>> new = np.array([3, 2, 4, 5, 3, 1])
>>> order = np.argsort(new)[np.argsort(np.argsort(ref))]
>>> order
array([3, 0, 5, 1, 4, 2])
>>> new[order]  # Should give ref
array([5, 3, 1, 2, 3, 4])
This is probably not any faster than the more general solutions to the similar question on SO, but it does guarantee unique indices as you requested. A further optimization would be to replace np.argsort(i) with something like the argsort_unique function in this answer. I would go one step further and just compute the inverse of the sort:
def inverse_argsort(a):
    fwd = np.argsort(a)
    inv = np.empty_like(fwd)
    inv[fwd] = np.arange(fwd.size)
    return inv

order = np.argsort(new)[inverse_argsort(ref)]
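Putting the pieces together as a runnable check, with the arrays from the question:

```python
import numpy as np

def inverse_argsort(a):
    # Invert a sort permutation without a second argsort
    fwd = np.argsort(a)
    inv = np.empty_like(fwd)
    inv[fwd] = np.arange(fwd.size)
    return inv

ref = np.array([5, 3, 1, 2, 3, 4])
new = np.array([3, 2, 4, 5, 3, 1])

order = np.argsort(new)[inverse_argsort(ref)]
print(order)        # [3 0 5 1 4 2]
print(new[order])   # [5 3 1 2 3 4], equal to ref
```

Note that each index appears exactly once in order, so the repeated value 3 is handled correctly.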

Slice pandas.DataFrame's second Multiindex

I have a pandas DataFrame of the form

       "a"        "b"        "c"       # first-level index
       0, 1, 2    0, 1, 2    0, 1, 2   # second-level index
index
0      1, 2, 3    6, 7, 8    5, 3, 4
1      2, 3, 4    7, 5, 4    9, 2, 5
2      3, 4, 5    4, 5, 6    0, 4, 5
...
representing a spot (a, b, or c) where a measurement took place and the results of the measurements (0, 1, 2) taken at that spot.
I want to do the following:
pick a slice of the sample (say measurement 0 on each spot)
take the mean of each i-th measurement (mean("a"[0], "b"[0], "c"[0]), mean("a"[1], "b"[1], "c"[1]), ...)
I tried to get the hang of the pandas MultiIndex documentation but do not manage to slice on the second level.
This is the column index:
MultiIndex(levels=[['a', 'b', 'c', ... , 'y'], [0, 1, 2, ... , 49]],
labels=[[0, 0, 0, ... , 0, 1, 1, 1, ... 1, ..., 49, 49, 49, ... 49]])
And the index
Float64Index([204.477752686, 204.484664917, 204.491577148, ..., 868.723022461], dtype='float64', name='wavelength', length=43274)
Using
df[:][0]
yields a KeyError (0 not in index), while
df.iloc[0]
returns the horizontal slice
0 "a":(1,2,3), "b":(6,7,8), "c":(5,3,4)
but I would like to have
"a":(1,2,3), "b":(6,7,4), "c":(5,9,0)
Thanks for any help.
PS: versions: pandas 0.19, Python 3.4
The trick was to specify the axis...
df.loc(axis=1)[:,0]
provides the 0-th measurement of each spot.
Since I use integers on the second-level index, I am not sure whether this actually selects the label 0 or just the 0-th measurement in the DataFrame, label-agnostic.
But for my use-case, this is actually sufficient.
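For completeness, a small sketch of both steps (slicing the second level, then averaging the i-th measurements) on a toy frame shaped like the one described. pd.IndexSlice is an equivalent of df.loc(axis=1)[:, 0] that also works in current pandas; the transpose-groupby trick for the means is my own choice here, not from the thread:

```python
import pandas as pd

# Toy frame: spots a/b/c, measurements 0/1/2, values as in the question
cols = pd.MultiIndex.from_product([["a", "b", "c"], [0, 1, 2]])
df = pd.DataFrame([[1, 2, 3, 6, 7, 8, 5, 3, 4],
                   [2, 3, 4, 7, 5, 4, 9, 2, 5],
                   [3, 4, 5, 4, 5, 6, 0, 4, 5]], columns=cols)

# Step 1: slice measurement 0 on every spot via the second column level
m0 = df.loc[:, pd.IndexSlice[:, 0]]      # columns ('a',0), ('b',0), ('c',0)
print(list(m0.iloc[0]))                  # [1, 6, 5]

# Step 2: mean of each i-th measurement across spots, by grouping
# the transposed frame on the second index level
means = df.T.groupby(level=1).mean().T   # columns 0, 1, 2
print(means.loc[0, 0])                   # 4.0, i.e. mean(1, 6, 5)
```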