QlikView: Disable Element Reorganization in List Box

I have a field 'Day_NAM' (containing the names of the days of the week) and a field 'Day_Order' (a numeric field that gives each day a number (Monday=1, Tuesday=2, etc.) so I can sort the days).
I managed to sort the days in the list box the way I wanted. However, when I make a selection in the list box, the days change their positions. What has to be done so that the elements in the list box do not change position when a user makes selections?
Link to the QVW file: FILE-LINK

In the Sort tab, if you have State = "Auto Ascending" try to uncheck that and see if it helps.
//Micke

The reason it isn't working is that in the initialization section (the variable SET statements at the beginning of your script) you don't define the day names to match the day names in your data:
SET DayNames='Mo;Di;Mi;Do;Fr;Sa;So';
SET LongDayNames='Montag;Dienstag;Mittwoch;Donnerstag;Freitag;Samstag;Sonntag';
To make the sorting work, those values need to match. That said, that method isn't 100% guaranteed to work, so you can force it (or, if you don't want to change those defaults because they match your regional settings) by using the dual() function, which assigns a number to each value in a field while displaying the text, essentially combining the two steps you wanted to do.
Your CaptureDay field will then look like this:
dual(CaptureDay,
     If(CaptureDay = 'Monday',    1,
     If(CaptureDay = 'Tuesday',   2,
     If(CaptureDay = 'Wednesday', 3,
     If(CaptureDay = 'Thursday',  4,
     If(CaptureDay = 'Friday',    5,
     If(CaptureDay = 'Saturday',  6,
     If(CaptureDay = 'Sunday',    7, 8)))))))) as CaptureDay,
Then your sort settings should look like this:
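The effect of dual(), a numeric sort key paired with display text, can be sketched in Python; the values here mirror the If() cascade above:

```python
# Sketch of QlikView's dual(): each value carries display text plus a
# hidden numeric sort key, so user selections never reshuffle the order.
DAY_ORDER = {"Monday": 1, "Tuesday": 2, "Wednesday": 3, "Thursday": 4,
             "Friday": 5, "Saturday": 6, "Sunday": 7}

days = ["Friday", "Monday", "Sunday", "Wednesday"]
# Sort by the numeric key, falling back to 8 for unknown values,
# mirroring the final 8 in the If() cascade.
print(sorted(days, key=lambda d: DAY_ORDER.get(d, 8)))
# ['Monday', 'Wednesday', 'Friday', 'Sunday']
```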

Related

Can I parse string to SQL code

I am trying to create a materialized view that requires slightly different filters between prod, dev, qa.
We have a variables table that stores random ids and I'm trying to find a way to store something like this in my variables table:
prod_filter_values = "(D.DEFID = 123 AND D.ATTRID IN (2, 3, 4)) OR
(D.DEFID = 3112 AND D.ATTRID IN (3, 30, 34, 23, 4)) OR
(D.DEFID = 379 AND D.ATTRID IN (3, 5, 8)) OR
(D.DEFID = 3076 AND D.ATTRID = 5);"
Then I'd do something like select * from variables_table where EVAL(prod_filter_values)
Is it possible?
Yes you can, as other answers have explained. However, a better way would be to make this data-driven: simply create tables in your various environments that contain the corresponding magic numbers, and join to them as required.
A second way is to have different views for the different environments with the numbers hard-coded there.
Anything that avoids building strings will be better for several reasons, including having the code in one place, stable code, no security/injection problems, and no parse overhead.
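The data-driven alternative can be sketched with SQLite; the table and column names here are illustrative, not from the original schema:

```python
import sqlite3

# Per-environment filter rows live in a table and are joined at query
# time, so no SQL strings are ever built or EVALed at runtime.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE d (defid INTEGER, attrid INTEGER);
    CREATE TABLE env_filters (env TEXT, defid INTEGER, attrid INTEGER);
    INSERT INTO d VALUES (123, 2), (123, 9), (379, 5);
    INSERT INTO env_filters VALUES
        ('prod', 123, 2), ('prod', 123, 3), ('prod', 379, 5);
""")

# Only rows whitelisted for the current environment come back.
rows = conn.execute("""
    SELECT d.defid, d.attrid
    FROM d
    JOIN env_filters f ON f.defid = d.defid AND f.attrid = d.attrid
    WHERE f.env = 'prod'
""").fetchall()
print(sorted(rows))  # [(123, 2), (379, 5)]
```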
Yes. Look up dynamic SQL:
https://docs.oracle.com/cloud/latest/db112/LNPLS/dynamic.htm#LNPLS01102
something like this:
EXECUTE IMMEDIATE 'select * from vars_table where ' || prod_filter_values;

SQL server query on json string for stats

I have this SQL Server database that holds contest participations. In the Participation table, I have various fields and a special one called ParticipationDetails. It's a varchar(MAX). This field is used to throw in all contest specific data in json format. Example rows:
Id,ParticipationDetails
1,"{'Phone evening': '6546546541', 'Store': 'StoreABC', 'Math': '2', 'Age': '01/01/1951'}"
2,"{'Phone evening': '6546546542', 'Store': 'StoreABC', 'Math': '2', 'Age': '01/01/1952'}"
3,"{'Phone evening': '6546546543', 'Store': 'StoreXYZ', 'Math': '2', 'Age': '01/01/1953'}"
4,"{'Phone evening': '6546546544', 'Store': 'StoreABC', 'Math': '3', 'Age': '01/01/1954'}"
I'm trying to get a query running that will yield this result:
Store, Count
StoreABC, 3
StoreXYZ, 1
I used to run this query:
SELECT TOP (20) ParticipationDetails, COUNT(*) Count FROM Participation GROUP BY ParticipationDetails ORDER BY Count DESC
This works as long as I want unique ParticipationDetails. How can I change this to "sub-query" into my JSON strings? I've gotten to this query, but I'm kind of stuck here:
SELECT 'StoreABC' Store, Count(*) Count FROM Participation WHERE ParticipationDetails LIKE '%StoreABC%'
This query gets me the results I want for a specific store, but I want the store value to be "anything that was put in there".
Thanks for the help!
First of all, I suggest avoiding any JSON handling in T-SQL, since it is not natively supported. If you have an application layer, let it manage that kind of formatted data (e.g. the .NET Framework and non-Microsoft frameworks have JSON serializers available).
However, you can convert your JSON strings using the function described in this link.
You can also write your own query that works on the strings, something like the following one:
SELECT
    T.Store,
    COUNT(*) AS [Count]
FROM
(
    -- Strip everything up to and including "'Store': '" (the +9 skips past
    -- the key and its separator), then cut everything from before "'Math'"
    -- onwards, leaving just the store value. Note the sample data uses
    -- single quotes around keys, so we search for '''Store''' / '''Math'''.
    SELECT
        STUFF(
            STUFF(ParticipationDetails, 1, CHARINDEX('''Store''', ParticipationDetails) + 9, ''),
            CHARINDEX('''Math''',
                STUFF(ParticipationDetails, 1, CHARINDEX('''Store''', ParticipationDetails) + 9, '')) - 3,
            LEN(STUFF(ParticipationDetails, 1, CHARINDEX('''Store''', ParticipationDetails) + 9, '')),
            '') AS Store
    FROM
        Participation
) AS T
GROUP BY
    T.Store
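As the first paragraph suggests, handling the JSON in the application layer is simpler. A minimal Python sketch, assuming the details were stored as valid double-quoted JSON (the question's sample rows use single quotes, which standard JSON parsers reject):

```python
import json
from collections import Counter

# Hypothetical rows as they might come back from the Participation table.
rows = [
    (1, '{"Phone evening": "6546546541", "Store": "StoreABC", "Math": "2"}'),
    (2, '{"Phone evening": "6546546542", "Store": "StoreABC", "Math": "2"}'),
    (3, '{"Phone evening": "6546546543", "Store": "StoreXYZ", "Math": "2"}'),
    (4, '{"Phone evening": "6546546544", "Store": "StoreABC", "Math": "3"}'),
]

# Parse each details blob and tally the "Store" values.
counts = Counter(json.loads(details)["Store"] for _, details in rows)
for store, count in counts.most_common():
    print(store, count)
# StoreABC 3
# StoreXYZ 1
```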

Convert "YYYYMMDD" format string to date in MDX?

I have some issue with applying date related functions on the "YYYYMMDD" format string in MDX. For example, if I have this query below:
with
member foo as WEEKDay("2013-03-21")
select
foo on 0
from
[Some Cube]
It will correctly output "5" for foo in SSMS. But if I change the second line to:
member foo as WEEKDay("20130321")
Unfortunately, it will throw a "type mismatch" error.
So what I want to do is convert the string to some recognizable date format and then apply the functions to it. Any ideas for the easiest way, e.g. using existing functions?
Please note that the string is actually input by users of whatever cube the MDX runs on, so the string format could already be recognizable, e.g. "YYYY-MM-DD". A hard-coded string-conversion algorithm therefore may not be OK.
This topic is old, but it may still help someone. The technique is brute-force, but scalable.
with
member foo_false as WeekDay("20130321")
member foo_true as WeekDay("2013-03-21")
member foo_brute as
case when IsError(WeekDay("20130321"))=False then WeekDay("20130321") else
case
/* YYYYMMDD */
when
IsError(WeekDay("20130321"))=True AND IsNumeric("20130321")=True
and IsError(WeekDay(Left("20130321",4)+'-'+Right(Left("20130321",6),2)+'-'+Right("20130321",2)))=False
then WeekDay(Left("20130321",4)+'-'+Right(Left("20130321",6),2)+'-'+Right("20130321",2))
/* DDMMYYYY */
when
IsError(WeekDay("20130321"))=True AND IsNumeric("20130321")=True
and IsError(WeekDay(Right("20130321",4)+'-'+Right(Left("20130321",4),2)+'-'+Left("20130321",2)))=False
then WeekDay(Right("20130321",4)+'-'+Right(Left("20130321",4),2)+'-'+Left("20130321",2))
/* MMDDYYYY */
when
IsError(WeekDay("20130321"))=True AND IsNumeric("20130321")=True
and IsError(WeekDay(Right("20130321",4)+'-'+Left("20130321",2)+'-'+Right(Left("20130321",4),2)))=False
then WeekDay(Right("20130321",4)+'-'+Left("20130321",2)+'-'+Right(Left("20130321",4),2))
/* Unsupported Message */
else "Unsupported Format" end
end
select
{foo_false,foo_true,foo_brute} on 0
from
[DATA Cube]
Add further supported formats before the "Unsupported Format" branch, and use your input string in place of "20130321".
But the easiest way, if possible, is to do the conversion in another layer (e.g. the SQL CONVERT function) before the value reaches the MDX.
The VBA function IsDate can be used to check whether the passed date is well formatted. If not, format it first using DateSerial and Mid:
with
member foo as "20130321"
member bar as
iif(vba!isdate(foo) = TRUE,
WEEKDay(foo), //--if the date is already well formatted, it can be used
WEEKday(vba!dateserial(vba!mid(foo, 1, 4), vba!mid(foo, 5, 2), vba!right(foo, 2))))
select
bar on 0
from
[some cube]
EDIT
The above code can be modified to accommodate other checks such as MMDDYYYY or DDMMYYYY, but in a lot of cases it is impossible for the engine to know intuitively whether the passed value is YYYYMMDD, DDMMYYYY or MMDDYYYY. Take for example the string 11111111: it could be in any of those formats, as it is a valid date however you break it up.
I would suggest having another member that stores the date format. That way, by looking at this member, the string can be parsed.
e.g.
with
member foo as
// "20130321" //YYYYMMDD
// "03212013"//MMDDYYYY
"21032013"//DDMMYYYY
MEMBER dateformat as "ddmmyyyy"
member bar as
iif(vba!isdate(foo) = TRUE,
WEEKDay(foo),
IIF(dateformat = "yyyymmdd", //YYYYMMDD
WEEKday(vba!dateserial(vba!mid(foo, 1, 4), vba!mid(foo, 5, 2), vba!right(foo, 2))),
IIF(dateformat = "mmddyyyy", //MMDDYYYY
WEEKday(vba!dateserial(right(foo, 4), vba!mid(foo, 1, 2), vba!mid(foo, 3, 2))),
IIF(dateformat = "ddmmyyyy", //DDMMYYYY
WEEKday(vba!dateserial(right(foo, 4), vba!mid(foo, 3, 2), vba!mid(foo, 1, 2))),
null
)
)
)
)
select
bar on 0
from
[aw cube]
With FooMember being an int here representing a yyyyMMdd date, I could convert it to a Date using the following code:
=Format(CDate(Left(CSTR(Fields!FooMember.Value),4)
+ "-" + Mid(CSTR(Fields!FooMember.Value), 5, 2)
+ "-" + Right(CSTR(Fields!FooMember.Value), 2)),
"dd/MM/yyyy")
To use it in a cube, use the following code:
Format(CDate(Left([Measures].[FooMember],4)
+ "-" + Mid([Measures].[FooMember], 5, 2)
+ "-" + Right([Measures].[FooMember], 2)),"yyyy/MM/dd")

AVG is not taking null values into consideration

I have loaded the following test data:
name, age,gender
"John", 33,m
"Sam", 33,m
"Julie",33,f
"Jimbo",, m
with schema: name:STRING,age:INTEGER,gender:STRING and I have confirmed that the Jimbo row shows a null for column "age" in the BigQuery Browser Tool > mydataset > Details > Preview section.
When I run this query :
SELECT AVG(age) FROM [peterprivatedata.testpeople]
I get 24.75 which is incorrect. I expected 33 because the documentation for AVG says "Rows with a NULL value are not included in the calculation."
Am I doing something wrong or is this a known bug? (I don't know if there's a public issues list to check). What's the simplest workaround to this?
This is a known bug where we coerce null numeric values to 0 on import. We're currently working on a fix. These values do, however, show up as "not defined" (which for various reasons is different from null), so you can check for them with IS_EXPLICITLY_DEFINED. For example:
SELECT sum(if(is_explicitly_defined(numeric_field), numeric_field, 0)) /
sum(if(is_explicitly_defined(numeric_field), 1, 0))
AS my_avg FROM your_table
Alternatively, you could use another column to represent is_null. Then the query would look like:
SELECT sum(if(numeric_field_is_null, 0, numeric_field)) /
sum(if(numeric_field_is_null, 0, 1))
AS my_avg FROM your_table
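The arithmetic behind the workaround can be sketched in Python; the field values are illustrative, taken from the question's test data:

```python
# Average only the explicitly defined values, treating the coerced
# zeros (originally NULLs) as absent, as the workaround query does.
ages = [33, 33, 33, None]  # Jimbo's age was null on import

defined = [a for a in ages if a is not None]
my_avg = sum(defined) / len(defined)
print(my_avg)  # 33.0, the expected AVG

# The buggy behavior coerced NULL to 0, giving (33+33+33+0)/4:
print(sum(a or 0 for a in ages) / len(ages))  # 24.75
```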

django order by date in datetime / extract date from datetime

I have a model with a datetime field and I want to show the most viewed entries for the day today.
I thought I might try something like dt_published__date to extract the date from the datetime field but obviously it didn't work.
popular = Entry.objects.filter(type='A', is_public=True).order_by('-dt_published__date', '-views', '-dt_written', 'headline')[0:5]
How can I do this?
AFAIK the __date syntax is not supported yet by Django. There is an open ticket for it.
If your database has a function to extract the date part, you can do this:
popular = Entry.objects.filter(**conditions).extra(select =
{'custom_dt': 'to_date(dt_published)'}).order_by('-custom_dt')
In newer Django versions this works out of the box (tested on Django 3.2 with MySQL 5.7).
Dataset
[
{ "id": 82148, "paid_date": "2019-09-30 20:51:11"},
{ "id": 82315, "paid_date": "2019-09-30 00:00:00"},
]
Query
Payment.objects.filter(order_id=135342).order_by('paid_date__date', 'id').values_list('id', 'paid_date__date')
Results
`<QuerySet [(82148, datetime.date(2019, 9, 30)), (82315, datetime.date(2019, 9, 30))]>`
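The ordering the original query asks for can be sketched in plain Python; the entry tuples here are hypothetical:

```python
from datetime import datetime

# Hypothetical entries: (id, dt_published, views)
entries = [
    (1, datetime(2019, 9, 30, 20, 51, 11), 120),
    (2, datetime(2019, 9, 30, 0, 0, 0), 300),
    (3, datetime(2019, 9, 29, 12, 0, 0), 500),
]

# order_by('-dt_published__date', '-views'): newest date first, then
# most views; the date cast ignores time-of-day, so id 2 beats id 1.
popular = sorted(entries, key=lambda e: (e[1].date(), e[2]), reverse=True)
print([e[0] for e in popular])  # [2, 1, 3]
```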