Producing a single row matrix using KQL

This is a matrix from a job I have in Databricks. I have cropped out the task names (which usually sit on the left side of the matrix). The green dots show when a task ran successfully. I want to recreate this using KQL, but instead of having all of the lines, I simply want one row: if all of the tasks run successfully on a given day the dot appears green, and if one or more fails the dot appears red. I have four columns of data: TimeGenerated, ActionName (whether it passed or failed), table name, and jobId (it would also be ideal to change the jobId to something easier to read, like 'job1'). I have records for each table going back many months, but I only need a two-week matrix. Many thanks if anyone can help.

// Generation of a data sample. Not part of the solution.
let t = materialize (
    print table_id = range(1, 10), dt = range(startofday(ago(20d)), now(), 1d)
    | mv-expand table_id
    | mv-expand dt
    | extend TimeGenerated = todatetime(dt),
             TableName = strcat("table_", table_id),
             ActionName = dynamic(["Passed", "Failed"])[iff(rand() < 0.9, 0, 1)]
    | where rand() < 0.9
);
// Solution starts here.
t
| where TimeGenerated >= startofday(ago(14d))
| extend TimeGenerated = format_datetime(TimeGenerated, 'yyyy-MM-dd')
| summarize result_symbol = iff(countif(ActionName == 'Failed') > 0, make_string(128997), make_string(129001)) by TimeGenerated
| evaluate pivot(TimeGenerated, any(result_symbol))
2022-04-14 | 2022-04-15 | 2022-04-16 | 2022-04-17 | 2022-04-18 | 2022-04-19 | 2022-04-20 | 2022-04-21 | 2022-04-22 | 2022-04-23 | 2022-04-24 | 2022-04-25 | 2022-04-26 | 2022-04-27 | 2022-04-28
πŸŸ₯         | πŸŸ₯         | 🟩         | 🟩         | 🟩         | πŸŸ₯         | πŸŸ₯         | πŸŸ₯         | πŸŸ₯         | πŸŸ₯         | πŸŸ₯         | πŸŸ₯         | πŸŸ₯         | 🟩         | πŸŸ₯
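For intuition, here is the same rollup sketched in Python; a minimal, hypothetical re-implementation of the summarize and pivot steps, not part of the KQL solution:

from collections import defaultdict

# Hypothetical sample records: (day, action) pairs, one per task run.
records = [
    ("2022-04-14", "Passed"), ("2022-04-14", "Failed"),
    ("2022-04-15", "Passed"), ("2022-04-16", "Passed"),
]

# A day is red if one or more tasks failed on it, green otherwise.
failed_by_day = defaultdict(bool)
for day, action in records:
    failed_by_day[day] |= (action == "Failed")

# Pivot into a single row: one symbol column per day. chr(128997) is the
# red square and chr(129001) the green one, matching make_string() above.
row = {day: chr(128997) if failed else chr(129001)
       for day, failed in sorted(failed_by_day.items())}
print(row)  # {'2022-04-14': 'πŸŸ₯', '2022-04-15': '🟩', '2022-04-16': '🟩'}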

Related

Conditionally remove a field in Splunk

I have a table generated by chart that lists the results of a compliance scan.
These results are typically Pass, Fail, and Error, but sometimes there is "Unknown" as a response.
I want to show the percentage of each (Pass, Fail, Error, Unknown), so I do the following:
| fillnull value=0 Pass Fail Error Unknown
| eval _total=Pass+Fail+Error+Unknown
<calculate percentages for each field>
<append "%" to each value (Pass, Fail, Error, Unknown)>
What I want to do is eliminate a "totally" empty column, and only display it if the field actually exists somewhere in the source data (not merely because of the fillnull command).
Is this possible?
I was thinking something like this, but cannot figure out the second step:
| eventstats max(Unknown) as _unk
| <if _unk is 0, drop the field>
edit
This could just as easily be reworded to:
if every entry for a given field is identical, remove it
Logically, this would look something like:
if(mvcount(values(fieldname))<2), fields - fieldname
Except, of course, that's not valid SPL
Could you try this logic after the chart:
``` fill with null values ```
| fillnull value=null()
``` rotate 90Β° twice, dropping empty/null columns ```
| transpose 0 include_empty=false | transpose 0 header_field=column | fields - column
[edit:] it works when I do the following, but I'm not sure it is easy to make it work under all conditions
| stats count | eval keep=split("1 2 3 4 5"," ") | mvexpand keep
| table keep nokeep
| fillnull value=null()
| transpose 0 include_empty=false | transpose 0 header_field=column | fields - column
[edit2:] and if you need to add more null() values, it can be done like this
| stats count | eval keep=split("1 2 3 4 5"," "), nokeep=0 | mvexpand keep
| table keep nokeep
| foreach nokeep [ eval nokeep=if(nokeep==0,null(),nokeep) ]
| transpose 0 include_empty=false | transpose 0 header_field=column | fields - column
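For intuition, the same "transpose twice, dropping empty columns" trick can be sketched in Python with pandas (hypothetical data, not part of the SPL answer):

import numpy as np
import pandas as pd

# Hypothetical results: the "Unknown" column is entirely null.
df = pd.DataFrame({
    "Pass": [12, 30], "Fail": [3, 1],
    "Error": [1, 0], "Unknown": [np.nan, np.nan],
})

# Transpose, drop rows (formerly columns) that are all null, transpose
# back: the all-empty "Unknown" column disappears, just like the double
# transpose with include_empty=false in SPL.
df = df.T.dropna(how="all").T
print(df.columns.tolist())  # ['Pass', 'Fail', 'Error']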

How to eval a string containing column names?

As I cannot attach conditional formatting to a table, I need a generic function to check whether a set of records, or all records, contains errors, and to show these errors in forms and/or reports.
To achieve this goal in the 'standard' way, I would have to define the rule for a field of a table every time I use that field in a control or report, which means repeating the same things an annoying number of times, not to mention introducing errors and ending up with a maintenance nightmare.
So my idea is to define all the checks for all the tables and their rows in a CheckError table, like the following fragment related to the table 'Persone':
TableName | FieldName | TestNumber | TestCode | TestMessage | ErrorType
Persone | CAP | 4 | len([CAP]) = 0 or isnull([CAP]) | CAP mancante | warning
Persone | Codice Fiscale | 1 | len([Codice Fiscale]) < 16 | Codice fiscale nullo o mancante | error
Persone | Data di nascita | 2 | (now() - [Data di nascita]) < 18 * 365 | Minorenne | info
Persone | mail | 5 | len([mail]) = 0 or isnull([mail]) | email mancante | warning
Persone | mail | 6 | (len([mail]) = 0 or isnull([mail])) and [modalitΓ  ritiro referti] = "e-mail" | richiesto l'invio dei referti via e-mail, ma l'indirizzo e-mail Γ¨ mancante | error
Persone | Via | 3 | len([Via]) = 0 or isnull([Via]) | Indirizzo mancante | warning
Now, in each form or report which uses the table Persone, I want to set an 'onload' property to a function:
' to validate all fields in all rows and set the appropriate bg and fg color
Private Sub Form_Open(Cancel As Integer)
    Call validazione.validazione(Form, "Persone", 0)
End Sub

' to validate all fields in the row identified by ID and set the appropriate bg and fg color
Private Sub Codice_Fiscale_LostFocus()
    Call validazione.validazione(Form, "Persone", ID)
End Sub
So the function validazione, at a certain point, has exactly one row of the table Persone and the set of expressions described in the [TestCode] column above.
Now, I need to logically evaluate the TestString against the table row, to obtain a true or a false.
If true, I'll set the fg and bg color of the field as normal;
if false, I'll set the fg and bg color as per error, info or warning, as defined by the column [ErrorType] above.
All the above is easy, ready, and running, except for the statement highlighted above:
How can I evaluate the test string against the table row to obtain a result?
Thank you
Paolo
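
The thread has no answer here, but for intuition, the data-driven idea can be sketched in Python (all names hypothetical; in Access VBA the analogous building block would be something like the Eval() function):

# Rules are stored as data: (field, test string, message, error type),
# mirroring the CheckError table above.
rules = [
    ("CAP",  "len(row['CAP']) == 0",  "CAP mancante",   "warning"),
    ("mail", "len(row['mail']) == 0", "email mancante", "warning"),
]

def validate(row):
    """Evaluate each rule's test string against one row (a dict)."""
    failures = []
    for field, test_code, message, error_type in rules:
        # eval() plays the role Access's Eval() would play in VBA;
        # a real system should use a safe expression evaluator instead.
        if eval(test_code, {"len": len}, {"row": row}):
            failures.append((field, error_type, message))
    return failures

row = {"CAP": "", "mail": "paolo@example.com"}
print(validate(row))  # [('CAP', 'warning', 'CAP mancante')]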

How to write a select query or server-side function that will generate a neat time-flow graph from many data points?

NOTE: I am using a graph database (OrientDB, to be specific). This gives me the freedom to write a server-side function in JavaScript or Groovy rather than limit myself to SQL for this issue.
NOTE 2: Since this is a graph database, the arrows below simply describe the flow of data. I do not literally need the arrows to be returned in the query. The arrows represent relationships.
I have data that is represented in a time-flow manner; i.e. EventC occurs after EventB which occurs after EventA, etc. This data is coming from multiple sources, so it is not completely linear. It needs to be congregated together, which is where I'm having the issue.
Currently the data looks something like this:
# | event | next
--------------------------
12:0 | EventA | 12:1
12:1 | EventB | 12:2
12:2 | EventC |
12:3 | EventA | 12:4
12:4 | EventD |
Where "next" is the out() edge to the event that comes next in the time-flow. On a graph this comes out to look like:
EventA-->EventB-->EventC
EventA-->EventD
Since this data needs to be congregated together, I need to merge duplicate events but preserve their edges. In other words, I need a select query that will result in:
        -->EventB-->EventC
EventA--|
        -->EventD
In this example, since EventB and EventD both occurred after EventA (just at different times), the select query will show two branches off EventA as opposed to two separate time-flows.
EDIT #2
If an additional set of data were to be added to the data above, with EventB->EventE, the resulting data/graph would look like:
# | event | next
--------------------------
12:0 | EventA | 12:1
12:1 | EventB | 12:2
12:2 | EventC |
12:3 | EventA | 12:4
12:4 | EventD |
12:5 | EventB | 12:6
12:6 | EventE |
EventA-->EventB-->EventC
EventA-->EventD
EventB-->EventE
I need a query to produce a tree like:
                   -->EventC
         -->EventB--|
        |          -->EventE
EventA--|
         -->EventD
EDIT #3 and #4
Here is the data with edges shown as opposed to the "next" column above. I also added a couple additional columns here to hopefully clear up any confusion about the data:
# | event | ip_address | timestamp | in | out |
----------------------------------------------------------------------------
12:0 | EventA | 123.156.189.18 | 2015-04-17 12:48:01 | | 13:0 |
12:1 | EventB | 123.156.189.18 | 2015-04-17 12:48:32 | 13:0 | 13:1 |
12:2 | EventC | 123.156.189.18 | 2015-04-17 12:48:49 | 13:1 | |
12:3 | EventA | 103.145.187.22 | 2015-04-17 14:03:08 | | 13:2 |
12:4 | EventD | 103.145.187.22 | 2015-04-17 14:05:23 | 13:2 | |
12:5 | EventB | 96.109.199.184 | 2015-04-17 21:53:00 | | 13:3 |
12:6 | EventE | 96.109.199.184 | 2015-04-17 21:53:07 | 13:3 | |
The data is saved like this to preserve each individual event and the flow of a session (labeled by the ip address).
TL;DR
Got lots of events, some duplicates, and need them all organized into one neat time-flow graph.
Holy cow.
After wrestling with this for over a week, I think I FINALLY have a working function. This isn't optimized for performance (oh, the loops!), but it gets the job done for the time being while I work on performance. The resulting OrientDB server-side function (written in JavaScript):
// Clear previous runs
db.command("truncate class tmp_Then");
db.command("truncate class tmp_Events");
// Get all distinct events
var distinctEvents = db.query("select from Events group by event");
// Send 404 if null, otherwise proceed
if (distinctEvents == null) {
  response.send(404, "Events not found", "text/plain", "Error: events not found");
} else {
  var edges = [];
  // Loop through all distinct events
  distinctEvents.forEach(function(distinctEvent) {
    var rid = distinctEvent.field("#rid");
    var eventType = distinctEvent.field("event");
    // The main query that finds all *direct* descendants of the distinct event
    var result = db.query("select from (traverse * from (select from Events where event = ?) where $depth <= 2) where #class = 'Events' and $depth > 1 and #rid in (select from Events group by event)", [eventType]);
    // Save the distinct event in a temp table to create temp edges
    db.command("create vertex tmp_Events set rid = ?, event = ?", [rid, eventType]);
    edges.push(result);
  });
  // The edges array defines which edges should exist for a given event
  edges.forEach(function(edge, index) {
    edge.forEach(function(e) {
      // Create the temp edge that corresponds to its distinct event
      db.command("create edge tmp_Then from (select from tmp_Events where rid = " + distinctEvents[index].field("#rid") + ") to (select from tmp_Events where rid = " + e.field("#rid") + ")");
    });
  });
  var result = db.query("select from tmp_Events");
  return result;
}
Takeaways:
Temp tables appeared to be necessary. I tried to do this without temp tables (classes), but I'm not sure it could be done. I needed to mock edges that didn't exist in the raw data.
Traverse was very helpful in writing the main query. Traversing through an event to find its direct, unique descendants was fairly simple.
Having the ability to write stored procs in Javascript is freaking awesome. This would have been a nightmare in SQL.
omfg loops. I plan to optimize this and continue to make it better so hopefully other people can find some use for it.
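
For intuition, the core "merge duplicate events but preserve their edges" step can be sketched in Python (hypothetical data; the function above does this with OrientDB temp vertices and edges):

from collections import defaultdict

# Hypothetical raw rows: (event, next_event) pairs from separate sessions.
rows = [
    ("EventA", "EventB"), ("EventB", "EventC"),
    ("EventA", "EventD"), ("EventB", "EventE"),
]

# Merge duplicates: one node per event name, union of outgoing edges.
children = defaultdict(set)
for event, nxt in rows:
    children[event].add(nxt)

def print_tree(node, depth=0):
    """Walk the merged graph from a root, printing it as a tree."""
    print("    " * depth + node)
    for child in sorted(children[node]):
        print_tree(child, depth + 1)

print_tree("EventA")
# EventA
#     EventB
#         EventC
#         EventE
#     EventD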

Rails, SQL: private chat, how to find last message in each conversation

I've got the following schema
+----+------+------+-----------+---------------------+--------+
| id | from | to | message | timestamp | readed |
+----+------+------+-----------+---------------------+--------+
| 46 | 2 | 6 | 123 | 2013-11-19 19:12:19 | 0 |
| 44 | 2 | 3 | 123 | 2013-11-19 19:12:12 | 0 |
| 43 | 2 | 1 | ????????? | 2013-11-19 18:37:11 | 0 |
| 42 | 1 | 2 | adf | 2013-11-19 18:37:05 | 0 |
+----+------+------+-----------+---------------------+--------+
from/to are the IDs of the users; message is, obviously, the message; plus a timestamp and a readed (read) flag.
When a user opens his profile, I want him to see the list of dialogs he participated in, with the last message in each dialog.
To find a conversation between 2 people I wrote this simple code (Message model):
def self.conversation(from, to)
  where(from: [from, to], to: [from, to])
end
So, I can now sort the messages and get the last one. But it's not cool to fire a lot of queries, one for each dialog.
How could I achieve the result I'm looking for with fewer queries?
UPDATE:
OK, it looks like it's not really clear what I'm trying to achieve.
For example, 4 users – Kitty, Dandy, Beggy and Brucy used that chat.
When Brucy opens her dialogs, she should see:
Beggy: hello brucy haw ar u! | <--- the last message from beggy
-------
Dandy: Hi brucy! | <---- the last message from dandy
--------
Kitty: Hi Kitty, my name is Brucy! | <--- this last message is from the current user
So, three separate dialogs. Brucy can then enter any dialog to continue the private conversation.
And I can't figure out how I could fetch these records without firing a query for each dialog between users.
This answer is a bit late, but there doesn't seem to be a great way to do this, in Rails 3.2.x at least.
However, here is the solution I came up with
(as I had the same problem on my website).
@sender_ids =
  Message.where(recipient_id: current_user.id)
         .order("created_at DESC")
         .select("DISTINCT owner_id")
         .paginate(per_page: 10, page: params[:page])

sql_queries =
  @sender_ids.map do |user|
    user_id = user.owner_id
    "(SELECT * FROM messages WHERE owner_id = #{user_id} "\
    "AND recipient_id = #{current_user.id} ORDER BY id DESC "\
    "LIMIT 1)"
  end.join(" UNION ALL ")

@messages = Message.find_by_sql(sql_queries)
ActiveRecord::Associations::Preloader.new(@messages, :owner).run
This gets the last 10 unique people who sent you messages.
For each of those people, it creates a UNION ALL query to get the last message received from each of them. With 50,000 rows, the query completes in about ~20 ms. And of course, to preload associations you have to use the Preloader as above, since .includes will not work when using .find_by_sql.
def self.conversation(from, to)
  order("timestamp asc").last
end
Edit:
This railscast will be helpful..
http://railscasts.com/episodes/316-private-pub?view=asciicast
EDIT2:
def self.conversation(from, to)
  select(:from, :to, :message).where(from: [from, to], to: [from, to]).group(:from, :to, :country).order("timestamp DESC").limit(1)
end
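
For intuition, the "last message per dialog" grouping (the classic greatest-n-per-group problem) can be sketched in Python with hypothetical in-memory data:

# Hypothetical messages: (id, from_user, to_user, text); higher id = newer.
messages = [
    (42, 1, 2, "adf"),
    (43, 2, 1, "hello"),
    (44, 2, 3, "123"),
    (46, 2, 6, "123"),
]

current_user = 2

# Key each dialog by the unordered pair of participants and keep the
# newest message for that pair.
last_per_dialog = {}
for mid, frm, to, text in messages:
    if current_user not in (frm, to):
        continue
    key = frozenset((frm, to))
    if key not in last_per_dialog or mid > last_per_dialog[key][0]:
        last_per_dialog[key] = (mid, frm, to, text)

for mid, frm, to, text in sorted(last_per_dialog.values()):
    other = to if frm == current_user else frm
    print(f"dialog with user {other}: {text}")
# dialog with user 1: hello
# dialog with user 3: 123
# dialog with user 6: 123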

Date Join Query with Calculated Fields

I'm creating an Access 2010 database to replace an old Paradox one. I'm just now getting to queries, and there is no hiding that I am new to SQL.
What I am trying to do is set up a query to be used by a graph. The graph's Y axis is to be a simple percentage passed, and the X axis is a certain day. The graph will be created on form load and refreshed as new records are entered, with a date range of "Between Date() And Date()-30" (30 days, rolling).
The database I'm working with can have multiple inspections per day with multiple passes and multiple fails. Each inspection is a separate record.
For instance, on 11/26/2012 there were 7 inspections done; 5 passed and 2 failed, a 71% ((5/7)*100%) acceptance. The "11/26/2012" and "71%" represent a data point on the graph. On 11/27/2012 there were 8 inspections done; 4 passed and 4 failed, a 50% acceptance. Etc.
Here is an example of a query with fields "Date" and "Disposition" for the date range 11/26/2012 to 11/27/2012:
SELECT Inspection.Date, Inspection.Disposition
FROM Inspection
WHERE (((Inspection.Date) Between #11/26/2012# And #11/27/2012#) AND ((Inspection.Disposition)="PASS" Or (Inspection.Disposition)="FAIL"));
Date | Disposition
11/26/2012 | PASS
11/26/2012 | FAIL
11/26/2012 | FAIL
11/26/2012 | PASS
11/26/2012 | PASS
11/26/2012 | PASS
11/26/2012 | PASS
11/27/2012 | PASS
11/27/2012 | PASS
11/27/2012 | FAIL
11/27/2012 | PASS
11/27/2012 | FAIL
11/27/2012 | PASS
11/27/2012 | FAIL
11/27/2012 | FAIL
NOTE: The "Date" field is of type Date, and the "Disposition" field is of type Text. There are days when no inspections are done, and these days are not to show up on the graph. The inspection disposition can also be "NA", which refers to another type of inspection that is not to be graphed.
Here is the layout I want to create in another query (again, for brevity, only 2 days in range):
Date | # Insp | # Passed | # Failed | % Acceptance
11/26/2012 | 7 | 5 | 2 | 71
11/27/2012 | 8 | 4 | 4 | 50
What I think needs to be done is some type of join on the record dates themselves, with "calculated fields" in the rest of the query results. The problem is that I haven't found out how to "flatten" the records by date AND maintain a count of the number of inspections and the number passed/failed, all in one query. Do I need multiple layered queries for this? I prefer not to store any of the queries as tables, as the only use of these numbers is in graphical form.
I was thinking of making new columns in the database to get around the "Disposition" field being textual, by assigning PASS a "1" and FAIL a "0", but this seems like a cop-out. There has to be a way to make this work in SQL; I just haven't found applicable examples.
Thanks for your help! Any input or suggestions are appreciated! Example databases with forms, queries, and graphs are also helpful!
You could group by Date, and then use aggregates like sum and count to calculate statistics for that group:
select [Date]
     , count(*) as [# Insp]
     , sum(iif(Disposition = 'PASS', 1, 0)) as [# Passed]
     , sum(iif(Disposition = 'FAIL', 1, 0)) as [# Failed]
     , 100.0 * sum(iif(Disposition = 'PASS', 1, 0)) / count(*) as [% Acceptance]
from YourTable
where Disposition in ('PASS', 'FAIL')
group by [Date]
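
For intuition, here is the same aggregation sketched in Python with pandas (a hypothetical frame standing in for the Inspection table):

import pandas as pd

# Hypothetical stand-in for the Inspection table.
df = pd.DataFrame({
    "Date": ["11/26/2012"] * 7 + ["11/27/2012"] * 8,
    "Disposition": ["PASS"] * 5 + ["FAIL"] * 2 + ["PASS"] * 4 + ["FAIL"] * 4,
})

# Keep only PASS/FAIL (dropping "NA" inspections), then group by day.
df = df[df["Disposition"].isin(["PASS", "FAIL"])]
g = df.groupby("Date")["Disposition"]
out = pd.DataFrame({
    "insp": g.count(),
    "passed": g.apply(lambda s: (s == "PASS").sum()),
    "failed": g.apply(lambda s: (s == "FAIL").sum()),
})
out["acceptance_pct"] = 100.0 * out["passed"] / out["insp"]
print(out)  # 11/26/2012 -> ~71.4%, 11/27/2012 -> 50.0%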