I might be doing this all wrong, as what I'm trying to do seems quite simple, but I'm having issues.
I want a Custom Metric of Logins to an Application. In the Graph I want to see "Number of logins" and also "Number of unique logins".
I have created a Custom Metric called Login, with a Dimension whose name is UserId and whose value is the user's UUID.
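For reference, the metric is published roughly like this (a sketch in Ruby with the aws-sdk-cloudwatch gem; the MyApp namespace and the user_uuid variable are just placeholders):

require 'aws-sdk-cloudwatch'

cloudwatch = Aws::CloudWatch::Client.new(region: 'us-east-1')

# One data point per login, tagged with the logged-in user's UUID
cloudwatch.put_metric_data(
  namespace: 'MyApp',                                    # placeholder namespace
  metric_data: [{
    metric_name: 'Login',
    dimensions: [{ name: 'UserId', value: user_uuid }],  # user_uuid = the user's UUID
    value: 1,
    unit: 'Count'
  }]
)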
I can run a query over that Metric using the Count aggregation to view "Number of Logins". However, there doesn't seem to be a Distinct operator (like there is in CloudWatch Logs Insights) for viewing "Number of unique logins".
The other issue is that it seems like you can only add one query to your graph? That seems like quite a limitation.
Given my tabular model, I'm attempting to write a measure that changes behavior depending on which role the effective user belongs to. This isn't traditional row-level security (RLS), since I'm not trying to filter by role; I just want to do an if-else instead.
I've come across the following solution at https://community.powerbi.com/t5/Desktop/DAX-Expression-For-Role-Level-Security-Using-DirectQuery/td-p/489699, which I believe will work, but I'd prefer querying Active Directory to see if the user belongs to said role, rather than another table on the model.
I've also seen some articles (e.g. https://community.powerbi.com/t5/Desktop/How-to-leverage-Active-Directory-to-filter-the-data-in-Power-BI/td-p/140479) about getting attributes from Active Directory into Power BI, but nothing that exposes the DAX being used.
Bottom line, if I could get a role name in DAX or call a function to check if the user is in a role, I'd be golden (assuming performance isn't compromised).
Edit: I should add that I'm currently leveraging one of three functions to get the user: USERNAME(), USEROBJECTID(), or USERPRINCIPALNAME().
I ended up giving up on leveraging Active Directory and did what it appears everyone else is doing (as seen in one of the links posted in my question).
For reference, here's a snippet that illustrates the solution:
-- Returns PrivilegedValue for users flagged in 'My User'; everyone else gets OtherValue
EVALUATE
SUMMARIZECOLUMNS (
    "Some Measure",
        IF (
            LOOKUPVALUE ( 'My User'[UsePrivilegedValue], 'My User'[Name], USERNAME () ),
            SUM ( 'Some Fact'[PrivilegedValue] ),
            SUM ( 'Some Fact'[OtherValue] )
        )
)
Just a bit of background info:
I have a dimension table which uses SCD2 to track user changes in our company (team changes, job title changes, etc.). See the example below:
I've built an Analysis Services cube and created all the necessary hierarchies for the dimensions, and it works well when navigating and drilling down through the fact table.
The problem I have is with the filters on the PerformancePoint dashboard. As I'm using the User Dimension table, with its multiple instances of each user, duplicates show up in the filter list. I can understand why, as the surrogate ID is being referenced on the dimension. But if I choose the first instance of the A-team I will see all their sales for a particular period, and if I choose the second instance I will see all their sales for a different period.
What is the best way to handle this type of behavior? Ideally I'd like to see a distinct list of teams in alphabetical order and when I choose the team name it shows all of their data over time.
I've considered using MDX query filters but I'd like to see if there's anything I haven't thought about.
I realise this isn't an easy and quick question but any help would be appreciated!
The answer was simple after having a trawl through my User Dimension table on the Cube.
Under my user dimension I added 2 duplicate attributes to my attributes list ("Team Filter" is a copy of "Team", and "User Filter" is a copy of "User Name"); these will be used only for filtering the dashboard.
Under the attribute properties for each duplicate I then set AttributeHierarchyOptimizedState to "Not Optimized". I also set their AttributeHierarchyVisible to false, as I'd already shown the two duplicate attributes in the hierarchy window in the middle.
Deploy your Cube to the server, go into PerformancePoint, and create a new MDX filter.
This is the code I used: it only shows dimension members which have a fact against them (which reduces the list considerably), and by using ALLMEMBERS at the dimension level it also gives me the option to show "All" at the top of the list.
Deploy the new filters and you can now see a distinct list of users and teams. It works perfectly and selects every instance (regardless of the SCD2 row).
I have a problem with one query.
Let me explain what I want:
For the sake of brevity, let's say that I have three tables:
-Offers
-Ratings
-Users
Now what I want to do is create a SQL query:
I want Offers to be listed with all their fields, plus an additional computed column that IS NOT stored anywhere, called AverageUserScore.
This AverageUserScore is the result of grabbing all offers belonging to a particular user, then grabbing all ratings belonging to those offers, and then averaging those ratings; this average score is the AverageUserScore.
To explain it even further: I need this query for a Ruby on Rails application. In the browser, inside the application, you can see all offers of other users, with AverageUserScore at the very end, as the last column.
Associations:
Offer has many ratings
Offer belongs to user
Rating belongs to offer
User has many offers
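In ActiveRecord terms, those associations look roughly like this (class names are assumed from the table names above):

class User < ActiveRecord::Base
  has_many :offers
end

class Offer < ActiveRecord::Base
  belongs_to :user
  has_many :ratings
end

class Rating < ActiveRecord::Base
  belongs_to :offer
end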
Assumptions made:
You actually have a numeric column (of any type that SQL's AVG is fine with) in your Rating model. I'm using a column ratings.rating in my examples.
AverageUserScore is unconventional, so average_user_score is better.
You don't mind not getting users that have no offers: average rating is not clearly defined for them anyway.
You don't deviate from Rails' conventions far enough to have a primary key other than id.
Displaying offers for each user is a straightforward task: in a loop of @users.each do |user|, you can do user.offers.each do |offer| and be set. The only problem here is that it will execute a separate query for every user. Not good.
The "fetching offers" part is a standard N+1 countermeasure, seen even in the Rails guides.
@users = User.includes(:offers).all
The interesting part here is only getting the averages.
For that I'm going to use Arel. It's already part of Rails (ActiveRecord is built on top of it), so you don't need to install anything extra.
You should be able to do a join like this:
User.joins(offers: :ratings)
On its own this won't get you anything interesting (apart from filtering out users that have no offers). Inside, though, you'll get a huge set of every rating joined with its corresponding offer and that offer's user. Since we're taking averages per user, we need to group by users.id, effectively making one entry per users.id value. That is, one per user. A list of users, yes!
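If you're curious what that join looks like in SQL, to_sql shows something roughly like this (exact quoting depends on your database adapter, with the conventional foreign keys assumed above):

User.joins(offers: :ratings).to_sql
# => SELECT "users".* FROM "users"
#      INNER JOIN "offers"  ON "offers"."user_id"   = "users"."id"
#      INNER JOIN "ratings" ON "ratings"."offer_id" = "offers"."id"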
Let's stop for a second and make some assignments to make Arel-related code prettier. In fact, we only need two:
users = User.arel_table
ratings = Rating.arel_table
Okay. So. We need to get a list of users (all fields), and for each user fetch the average value of the rating field across their offers' ratings. So let's compose these SQL expressions:
# users.*
user_fields = users[Arel.star] # Arel.star is a portable SQL "wildcard"
# AVG(ratings.rating) AS average_user_score
average_user_score = ratings[:rating].average.as('average_user_score')
All set. Ready for the final query:
User.includes(:offers) # N+1 counteraction
.joins(offers: :ratings) # dat join
.select(user_fields, average_user_score) # fields we need
.group(users[:id]) # grouping to only get one row per user
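For completeness, assuming the query above is assigned to @users, consuming the result could look like this (the name and title columns are assumptions for illustration):

@users.each do |user|
  # average_user_score is exposed as an attribute thanks to the AS alias in the SELECT
  puts "#{user.name}: #{user.average_user_score}"
  user.offers.each { |offer| puts "  #{offer.title}" }
end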
I want to be able to combine the functionality of the Kibana Terms Graph (create buckets based on the unique values of a particular attribute) and the Histogram Graph (separate data into buckets based on queries and then plot the data over time).
Overall, I want to create a Histogram, but I only want to create the Histogram based on the results of one query, not multiple queries like it's being done in the Kibana demo app. Instead, I want each bucket to be dynamically created per unique value of my particular field. For example, consider the following data returned by my query:
{"myValueType": "New York"}
{"myValueType": "New York"}
{"myValueType": "New York"}
{"myValueType": "San Francisco"}
{"myValueType": "San Francisco"}
Also assume that each record has a timestamp field for separating histogram data by date. For that particular date, I want the data to be communicated as a count of 3 into the New York bucket and a count of 2 into the San Francisco bucket. However, I am only able to show a count of 5 for my one linked query. When I configure the Histogram, I am able to specify a field to use for my timestamp, but not to create buckets from. I could've sent a field to compute a total/min/max/mean, but this field would've had to be numeric, so that is not the solution either.
If I were to use a Term Graph to create a pie or bar graph, I am indeed able to separate my data into buckets based on the unique values of my specified field (in this case, "myValueType"), but this would total up the data for all-time, not split up the data by timestamp. Although this is good information to know, it is not ideal because I wouldn't be able to detect trends in my data.
I am looking for a solution that will do one of the following:
Let me dynamically create queries in my Kibana dashboard to create "buckets" in a Histogram
Allow me to run an Elasticsearch Terms Aggregation to (supposedly) split up my data into buckets based on "myValueType" and integrate these results into my Histogram (see the sketch after this list)
Customize the JSON of my dashboard, but this doesn't look possible to me
Create my own custom panel, but this is not desirable
Link a Kibana "TopN" query. Actually, this has proven to be a workaround for my problem, because the TopN query dynamically creates one query per unique value/term of the specified fieldName. However, the problem is that I can only link one colour to this TopN query, and each unique term will be placed in a bucket that uses a different shade of that colour. Ideally, every bucket in my Histogram would have a completely different colour associated with it. Imagine how difficult it will be to distinguish unique terms as the number of buckets grows.
If all else fails, I could make one query per unique value of my search field. This would allow me to have one unique colour per bucket, but as the number of unique terms in the "myValueType" field changes, I'd need to keep adding/removing queries from Kibana, which can get quite messy.
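For what it's worth, the Terms Aggregation option maps directly onto an Elasticsearch date_histogram with a nested terms sub-aggregation. Here's a minimal sketch using the elasticsearch-ruby client (the index pattern, timestamp field, and interval are assumptions):

require 'elasticsearch'

client = Elasticsearch::Client.new               # defaults to http://localhost:9200

response = client.search(
  index: 'logstash-*',                           # assumed index pattern
  body: {
    size: 0,                                     # only the aggregations are needed
    aggs: {
      over_time: {
        date_histogram: { field: '@timestamp', interval: 'day' },
        aggs: {
          by_value_type: { terms: { field: 'myValueType' } }
        }
      }
    }
  }
)
# Each daily bucket in response['aggregations']['over_time']['buckets'] then holds
# one sub-bucket per unique myValueType value, each with its own doc_count.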
I'm sure there is something that I am missing here. Please help me out. Many thanks.
A highly related Stack Overflow question: Is it Possible to Use Histogram Facet or Its Curl Response in Kibana
This would be a great feature. It looks like it will be supported in Kibana 4, but there doesn't seem to be much more information out there than that.
For reference: https://github.com/elasticsearch/kibana/issues/1249
Maybe a little late, but it is actually possible in the newest beta release:
kibana 4 beta 3 installation download
I would like to join the User object and the ProjectPermission object to see how many users have been assigned to a project, for audit purposes. I don't see a common field with common values (email address or first name/last name) between these objects. I used the Excel plugin to retrieve two separate data sheets and was unable to map them. Any thoughts on how to do this?
You're probably seeing something similar to the following when you query on ProjectPermissions:
In this situation, the default User object selected from the "Columns" picker in the query dialog gives you the User's DisplayName, which doesn't unambiguously map to a Rally UserID.
Note, however, that you can add dot-notation sub-fields of Objects manually by typing them into the Columns field. In the following example, I've included User.Username and User.LastLoginDate as additional fields I want to show on the Permissions report:
Of course, you could also just include User.Username, and run a second query on the User object with all fields selected, and do a join in Excel.
One note of caution - if you have many users (say 1,000), and a lot of projects, (say 1,000, which is not uncommon in large Rally subscriptions), querying directly against the ProjectPermissions endpoint can rapidly result in total results that number on the order of 10^6. This will probably time out in an Excel query.
The Rally User Management: User Permissions Summary script works around this by querying Permissions in a loop on a user-by-user basis. It's slow, but it returns results without timeouts. Certainly not as convenient as Excel either - you need to install Ruby 1.9.2+ and the rally_api gem to get it working.
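If you end up going the script route, the user-by-user loop it performs boils down to something like the following with the rally_api gem (the credentials, workspace/project names, and option keys below are from memory and should be treated as placeholders):

require 'rally_api'

# Placeholder credentials and workspace/project names
config = {
  base_url:  "https://rally1.rallydev.com/slm",
  username:  "user@example.com",
  password:  "secret",
  workspace: "My Workspace",
  project:   "My Project"
}
rally = RallyAPI::RallyRestJson.new(config)

query              = RallyAPI::RallyQuery.new
query.type         = "projectpermission"
query.fetch        = "User,Project,Role"
query.query_string = '(User.UserName = "user@example.com")'  # one user per query keeps result sets small

results = rally.find(query)
results.each { |permission| puts permission["Role"] }        # Admin / Editor / Viewer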