Dashboard table: get count of subquery - splunk

I've been struggling with this for a few hours.
I am trying to build a simple table that will have the following:
Result of a search containing an id and a date
Count of a subquery using the id of the search in 1
I have two working queries that achieve what I want individually, but when I try to use a join I lose the data of the query for #2.
Query #1: index=<redacted> source="<redacted>" "<some query text>" AND parentId=1574 | fields childId, date
-> returns childId 333, for example
Query #2: index=<redacted> source="<redacted>" "<some query text>" AND childId=333 | chart count as childCount | fields childCount
-> returns the count, 4 for example
When I try to join the two using something like this, I lose the count:
index=<redacted> source="<redacted>" "<some query text>" AND
parentId=1574
| fields childId, date
| join type=left childId [
search index=<redacted> source="<redacted>" "<some query text>" AND
childId=333 | chart count as childCount | fields childCount
]
| table artifactId, childCount, date
I've also tried an outer join, append, etc. to no avail. The count could be 0 as well.
Any help would be appreciated,
Thanks!

The data from the second search is lost because the subsearch does not return the childId field the join needs: the chart command keeps only the aggregated childCount, so there is nothing for join to match on. Keep childId by aggregating with a by clause, e.g. | chart count as childCount by childId (or | stats count as childCount by childId), followed by | fields childId childCount.
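A sketch of the corrected search, with the index/source redacted as in the question (the final table uses childId rather than artifactId, since only childId and date survive the first fields command):
index=<redacted> source="<redacted>" "<some query text>" AND parentId=1574
| fields childId, date
| join type=left childId [
    search index=<redacted> source="<redacted>" "<some query text>" AND childId=333
    | stats count as childCount by childId
    | fields childId childCount
]
| table childId, childCount, date
If a 0 is wanted when the subsearch finds no children, | fillnull value=0 childCount can be appended after the join.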

Related

Splunk - I want to add a value from stats count() to a value from a lookup table and show that value in a table

The objective of the query I'm trying to write is to take a count of raw data from the previous month and add it to a count from a lookup table (.csv).
What I have attempted to do is…
index=*** source=***
| stats count(_raw) as monthCount
| join
[ | inputlookup Log_Count_YTD.csv]
| eval countYTD = toNumber(monthCount) + toNumber(TOTAL_COUNT_YTD)
| table countYTD
This query doesn’t return any value in the table. TOTAL_COUNT_YTD is the only field from the inputlookup file. Let me know if there is any other information you need to help me out with this one. Thanks!
The stats command transforms the data so it has only one field: monthCount. The inputlookup returns only the TOTAL_COUNT_YTD field. The join command works by comparing values of common fields between the main search and the subsearch. Since there are no common fields, no events are joined.
There is no need for join in this case. The appendcols command will do, assuming the CSV contains a single field in a single row.
index=*** source=***
| stats count() as monthCount
| appendcols
[ | inputlookup Log_Count_YTD.csv]
| eval countYTD = toNumber(monthCount) + toNumber(TOTAL_COUNT_YTD)
| table countYTD
FWIW, the tonumber function is unnecessary, but doesn't hurt.
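In other words, the eval above could just as well be written without the conversion functions:
| eval countYTD = monthCount + TOTAL_COUNT_YTD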

Get total count and first 3 columns

I have the following SQL query:
SELECT TOP 3 accounts.username
,COUNT(accounts.username) AS count
FROM relationships
JOIN accounts ON relationships.account = accounts.id
WHERE relationships.following = 4
AND relationships.account IN (
SELECT relationships.following
FROM relationships
WHERE relationships.account = 8
);
I want to return the total count of accounts.username and the first 3 accounts.username values (in no particular order). Unfortunately accounts.username and COUNT(accounts.username) cannot coexist. The query works fine after removing one of them. I don't want to send the request twice with different select bodies. The count could reach 1000+, so I would prefer to calculate it in SQL rather than in code.
The current query returns the error Column 'accounts.username' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause., which has not led me anywhere. This is different from other questions because I do not want to use the GROUP BY clause. Is there a way to do this with FOR JSON AUTO?
The desired output could be:
+-------+----------+
| count | username |
+-------+----------+
| 1551 | simon1 |
| 1551 | simon2 |
| 1551 | simon3 |
+-------+----------+
or
+----------------------------------------------------------------+
| JSON_F52E2B61-18A1-11d1-B105-00805F49916B |
+----------------------------------------------------------------+
| [{"count": 1551, "usernames": ["simon1", "simon2", "simon3"]}] |
+----------------------------------------------------------------+
If you want to display the total count of rows that satisfy the filter conditions (and where username is not null) in an additional column in your resultset, then you could use window functions:
SELECT TOP 3
a.username,
COUNT(a.username) OVER() AS cnt
FROM relationships r
JOIN accounts a ON r.account = a.id
WHERE
r.following = 4
AND EXISTS (
SELECT 1 FROM relationships r1 WHERE r1.account = 8 AND r1.following = r.account
)
;
Side notes:
if username is not nullable, use COUNT(*) rather than COUNT(a.username): this is more efficient since it does not require the database to check every value for nullity (see the sketch after this list)
table aliases make the query easier to write, read and maintain
I usually prefer EXISTS over IN (but here this is mostly a matter of taste, as both techniques should work fine for your use case)
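For instance, with a non-nullable username, the first side note applied to the query above would look like this (same query, only the count expression changes):
SELECT TOP 3
    a.username,
    COUNT(*) OVER() AS cnt   -- counts every row that passes the WHERE clause
FROM relationships r
JOIN accounts a ON r.account = a.id
WHERE
    r.following = 4
    AND EXISTS (
        SELECT 1 FROM relationships r1 WHERE r1.account = 8 AND r1.following = r.account
    );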

How to aggregate logs by field and then by bin in AWS CloudWatch Insights?

I'm trying to write a query that first aggregates by a field's count and then by bin(1h). For example, I would like to get a result like:
# Date Field Count
1 2019-01-01T10:00:00.000Z A 123
2 2019-01-01T11:00:00.000Z A 456
3 2019-01-01T10:00:00.000Z B 567
4 2019-01-01T11:00:00.000Z B 789
Not sure if it's possible though, the query should be something like:
fields Field
| stats count() by Field by bin(1h)
Any ideas how to achieve this?
Is this what you need?
fields Field | stats count() by Field, bin(1h)
If you want to create a line chart, you can do it by separately counting each value that your field could take.
fields
Field = 'A' as is_A,
Field = 'B' as is_B
| stats sum(is_A) as A, sum(is_B) as B by bin(1hour)
This solution requires your query to include a string literal of each value ('A' and 'B' in OP's example). It works as long as you know what those possible values are.
This might be what Hugo Mallet was looking for, except that the avg() function won't work here, so he'd have to calculate the average by dividing by a total.
Not able to group by a certain field and create visualizations.
fields Field
| stats count() by Field, bin(1h)
Keep getting this message
No visualization available. Try this to get started:
stats count() by bin(30s)

In Postgres: Select columns from a set of arrays of columns and check a condition on all of them

I have a table like this (an id column plus 0/1 indicator columns x1..x4 and y1..y4):
I want to perform a count over different sets of columns (all subsets where there is at least one element from X and one element from Y). How can I do that in Postgres?
For example, I may have {x1,x2,y3}, {x4,y1,y2,y3}, etc. I want to count the number of "id"s having 1 in every column of the set. So for the first set:
SELECT COUNT(id) FROM tbl WHERE x1=1 AND x2=1 AND y3=1;
and for the second set the same:
SELECT COUNT(id) FROM tbl WHERE x4=1 AND y1=1 AND y2=1 AND y3=1;
Is it possible to write a loop that goes over all these sets and queries the table accordingly? The array will have more than 10,000 sets, so it cannot be done manually.
You should be able to convert the table columns to an array using ARRAY[col1, col2, ...], then use the array_positions function, setting the second parameter to the value you're checking for. So, given your example above, this query:
SELECT id, array_positions(array[x1,x2,x3,x4,y1,y2,y3,y4], 1)
FROM tbl
ORDER BY id;
Will yield this result:
+----+-------------------+
| id | array_positions |
+----+-------------------+
| a | {1,4,5} |
| b | {1,2,4,7} |
| c | {1,2,3,4,6,7,8} |
+----+-------------------+
Here's a SQL Fiddle.
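To relate this back to the original question, the positions array can then be compared against each set with the array containment operator @> (a sketch, not part of the answer above; it assumes the column order in the array literal stays fixed, so the set {x1,x2,y3} maps to positions {1,2,7}):
-- count the ids where x1, x2 and y3 are all 1 (positions 1, 2 and 7 in the array literal)
SELECT count(id)
FROM tbl
WHERE array_positions(array[x1, x2, x3, x4, y1, y2, y3, y4], 1) @> array[1, 2, 7];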

The record represented by the ID with the Highest aggregate value

I already have the code to display the highest aggregate value for an ID.
select max(fk3_job_role_id),max(sum(no_of_placements))
from fact_accounts
group by fk3_job_role_id
the result looks like:
[max(fk3_job_role_id)] | [max(sum(no_of_placements))]
-----------------------|-----------------------------
5 | 25
However, I want to display the job_role_desc for that id instead of the fk3_job_role_id itself.
The table for it looks like:
[job_role_id] | [job_role_desc]
--------------------------------
1 | job1
2 | job2
3 | job3
4 | job4
5 | job5
select job_role_desc
from job_role
where job_role_id in (select fk3_job_role_id
                      from fact_accounts
                      group by fk3_job_role_id
                      having sum(no_of_placements) = (select max(sum(no_of_placements))
                                                      from fact_accounts
                                                      group by fk3_job_role_id))
You need to query the job-description table for job_role_desc using a subquery (the table is called job_role above as a placeholder, since the question does not give its name). The inner query, also known as a subquery, finds the fk3_job_role_id whose total no_of_placements equals the overall maximum, and the outer query then looks up the matching description with a simple "in" clause.
Edit
If you also need the sum of placements, you can get it by referencing the derived table created during the execution of the subquery, as in the sketch below.
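A sketch of that idea, again using job_role as a placeholder name for the description table (adjust it to your schema):
select j.job_role_desc, t.total_sum
from (select fk3_job_role_id, sum(no_of_placements) as total_sum
      from fact_accounts
      group by fk3_job_role_id) t    -- t: one row per role with its total placements
join job_role j on j.job_role_id = t.fk3_job_role_id
where t.total_sum = (select max(sum(no_of_placements))
                     from fact_accounts
                     group by fk3_job_role_id);
Here t is the derived table created by the subquery, so both the description and the total can be selected, while the WHERE clause keeps only the role(s) with the highest total.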