Splunk: Combining multiple chart queries to get a single table

As of today we have two queries running.
1st query: count of API requests grouped by apiName and status
index=aws* api.metaData.pid="myAppName"
| rename api.p as apiName
| chart count BY apiName "api.metaData.status"
| multikv forceheader=1
| table apiName success error NULL
which displays a table like the one shown below:
| apiName | success | error | NULL |
|---------|---------|-------|------|
| Test1   | 10      | 20    | 0    |
| Test2   | 10      | 20    | 0    |
| Test3   | 10      | 20    | 0    |
| Test4   | 10      | 20    | 0    |
| Test5   | 10      | 20    | 0    |
| Test6   | 10      | 20    | 0    |
2nd query: latency of API requests grouped by apiName
index=aws* api.metaData.pid="myAppName"
| rename api.p as apiName
| rename api.measures.tt as Response_Time
| chart min(Response_Time) as RT_fastest max(Response_Time) as RT_slowest by apiName
| table apiName RT_fastest RT_slowest
which displays a table like the one below:
| apiName | RT_fastest | RT_slowest |
|---------|------------|------------|
| Test1   | 141        | 20         |
| Test2   | 10         | 20         |
| Test3   | 10         | 20         |
| Test4   | 0          | 20         |
| Test5   | 10         | 20         |
| Test6   | 10         | 20         |
Question:
As you can see, both tables are grouped by apiName. Is there a way to combine these queries so that I get a single result like this?
| apiName | success | error | NULL | RT_fastest | RT_slowest |
|---------|---------|-------|------|------------|------------|
| Test1   | 10      | 20    | 0    | 141        | 20         |
| Test2   | 10      | 20    | 0    | 10         | 20         |
| Test3   | 10      | 20    | 0    | 10         | 20         |
| Test4   | 10      | 20    | 0    | 0          | 20         |
| Test5   | 10      | 20    | 0    | 10         | 20         |
| Test6   | 10      | 20    | 0    | 10         | 20         |
 
I could not find any documentation on combining multiple chart queries into one. Could someone please help me with this? Thanks :)

The challenge here is that the two queries use different groupings: apiName and status in query 1, apiName alone in query 2. Simply combining the two chart commands is not possible.
We can, however, append the second query to the first and then merge the results on the shared apiName key. Try this:
index=aws* api.metaData.pid="myAppName"
| rename api.p as apiName
| chart count BY apiName "api.metaData.status"
| multikv forceheader=1
| table apiName success error NULL
| append [ search index=aws* api.metaData.pid="myAppName"
| rename api.p as apiName
| rename api.measures.tt as Response_Time
| chart min(Response_Time) as RT_fastest max(Response_Time) as RT_slowest by apiName
| table apiName RT_fastest RT_slowest ]
| stats values(*) as * by apiName
| table apiName success error NULL RT_fastest RT_slowest
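The final stats values(*) as * by apiName collapses the appended rows into a single row per apiName. For what it's worth, the same table can often be produced in one pass, without append, using eval-based stats counts. A sketch, assuming the status field takes the literal values success and error, and that events with no status should land in the NULL column:
index=aws* api.metaData.pid="myAppName"
| rename api.p as apiName
| rename api.measures.tt as Response_Time
| eval status='api.metaData.status'
| stats count(eval(status="success")) as success
        count(eval(status="error")) as error
        count(eval(isnull(status))) as NULL
        min(Response_Time) as RT_fastest
        max(Response_Time) as RT_slowest
        by apiName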

Related

How to transpose columns to rows of a table dynamically in DB2 SQL, where columns may increase over time, without changing the code?

I have a table whose columns will increase over time. I want to write a query that transposes the table even when columns are added later, without adding extra lines of code. I only need to transpose the columns whose value is 'Y'.
E.g., source data on day 1:
| Emp_ID | DOC 1 | DOC 2 |
| ------ |-------|-------|
| 001 | Y |Y |
| 002 | N |Y |
Day 1 output
| Emp_ID | Transposed |
| ------ |-------|
| 001 | DOC 1 |
| 001 | DOC 2 |
| 002 | DOC 2 |
Now the columns may eventually increase, and I want the same query block to handle it without any change in code. Is that possible?
Source data on day 2
| Emp_ID | DOC 1 | DOC 2 | DOC 3|
| ------ |-------|-------|------|
| 001 | Y |Y |N |
| 002 | N |Y |Y |
| 003 | N |N |N |
Day 2 output
| Emp_ID | Transposed |
| ------ |-------|
| 001 | DOC 1 |
| 001 | DOC 2 |
| 002 | DOC 2 |
| 002 | DOC 3 |
**Note: only docs having 'Y' as the value are considered. Thanks in advance.**
You need to construct the following statement dynamically for the given base table MYSCHEMA.MYTAB (EMP_ID INT, DOC1 CHAR, ..., DOCn CHAR):
SELECT T.EMP_ID, V.COLNAME
FROM
  MYSCHEMA.MYTAB T
, (
    SELECT COLNAME
    FROM SYSCAT.COLUMNS
    WHERE TABSCHEMA = 'MYSCHEMA' AND TABNAME = 'MYTAB'
      AND COLNAME <> 'EMP_ID' AND TYPENAME LIKE '%CHAR%'
  ) V
WHERE
     V.COLNAME = 'DOC1' AND T.DOC1 = 'Y'
  ...
  OR V.COLNAME = 'DOCn' AND T.DOCn = 'Y'
Only the WHERE part of the statement is dynamic. The sub-select on SYSCAT.COLUMNS returns all the table columns to transpose (all columns except EMP_ID).
The following SELECT INTO statement inside the table function generates the final statement for whatever number of such columns.
CREATE OR REPLACE FUNCTION MYFUNC ()
RETURNS TABLE (EMP_ID INT, TRANSPOSED VARCHAR (128))
BEGIN
  DECLARE V_SQL VARCHAR (4000);
  DECLARE V_EMP_ID INT;
  DECLARE V_TRANSPOSED VARCHAR (128);
  DECLARE SQLSTATE CHAR(5);
  DECLARE C1 CURSOR FOR S1;
  -- Build the final statement; LISTAGG generates the OR'ed WHERE
  -- conditions from the current column list in the catalog
  SELECT
  'SELECT T.EMP_ID, V.COLNAME '
  || 'FROM '
  || ' MYSCHEMA.MYTAB T '
  || ', ( '
  || 'SELECT COLNAME '
  || 'FROM SYSCAT.COLUMNS '
  || 'WHERE TABSCHEMA = ''MYSCHEMA'' AND TABNAME = ''MYTAB'' AND COLNAME <> ''EMP_ID'' AND TYPENAME LIKE ''%CHAR%'''
  || ' ) V '
  || 'WHERE '
  || LISTAGG ('V.COLNAME = ''' || COLNAME || ''' AND T.' || COLNAME || ' = ''Y''', ' OR ')
  INTO V_SQL
  FROM SYSCAT.COLUMNS
  WHERE TABSCHEMA = 'MYSCHEMA' AND TABNAME = 'MYTAB' AND COLNAME <> 'EMP_ID' AND TYPENAME LIKE '%CHAR%';
  -- Execute the generated statement and pipe each row out of the function
  PREPARE S1 FROM V_SQL;
  OPEN C1;
  L1: LOOP
    FETCH C1 INTO V_EMP_ID, V_TRANSPOSED;
    IF SQLSTATE = '02000' THEN LEAVE L1; END IF;
    PIPE (V_EMP_ID, V_TRANSPOSED);
  END LOOP L1;
  CLOSE C1;
  RETURN;
END#
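The function is then queried like any table expression. For example, assuming # is the statement terminator as in the definition above:
SELECT * FROM TABLE (MYFUNC()) AS T#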

INDEX MATCH in SQL

Sorry in advance; I am very new to SQL. I don't know how to do a simple task like the Excel INDEX MATCH equivalent in SQL.
I have 2 tables in SQL:
Table1
| Name   | Limit1 | Limit2 |
|--------|--------|--------|
| First  | A      | 05     |
| Second | B      | 10     |
| Third  |        | 10     |
Table2
| Limit1Key | Limit1Value | Limit2Key | Limit2Value |
|-----------|-------------|-----------|-------------|
| A         | 20,000      | 02        | 2,000,000   |
| B         | 50,000      | 05        | 5,000,000   |
|           |             | 10        | 10,000,000  |
I want to get a final table like the one below.
Result Table
| Name   | Limit1 | Limit2     |
|--------|--------|------------|
| First  | 20,000 | 5,000,000  |
| Second | 50,000 | 10,000,000 |
| Third  |        | 10,000,000 |
If there is already another post similar to this, please guide me to it.
Thank you!
If I understand correctly, you just want two joins:
select t1.*, t2_1.limit1value, t2_2.limit2value
from table1 t1
left join table2 t2_1 on t2_1.limit1key = t1.limit1
left join table2 t2_2 on t2_2.limit2key = t1.limit2;
Note that the left joins keep the Third row even though its Limit1 is empty.

New column based on the value of another column (Oracle)

In an Oracle 11g database I have table called organizations which looks like this:
| ORGANIZATION_ID | ORGANIZATION_NAME | TREE_ORGANIZATION_ID | ORGANIZATION_RANG |
|-----------------|-------------------|----------------------|-------------------|
| 1 | Facebook | \1 | 1 |
| 2 | Instagram | \1\2 | 2 |
| 3 | Whatsapp | \1\3 | 2 |
| 4 | Alphabet | \4 | 1 |
| 5 | Nest | \4\5 | 2 |
| 6 | Google | \4\6 | 2 |
| 7 | YouTube | \4\6\7 | 3 |
As you can see this table has column called TREE_ORGANIZATION_ID where I store information about relationship of organizations.
This code returns all organizations that have a specific ID in the column TREE_ORGANIZATION_ID. In my case this code returns the Google and YouTube entries.
SELECT *
FROM ORGANIZATIONS
WHERE TREE_ORGANIZATION_ID LIKE '%\' || '6'
   OR TREE_ORGANIZATION_ID LIKE '%\' || '6' || '\%';
I want to add new column called STATUS which looks like this:
| ORGANIZATION_ID | ORGANIZATION_NAME | TREE_ORGANIZATION_ID | ORGANIZATION_RANG | STATUS |
|-----------------|-------------------|----------------------|-------------------|----------|
| 6 | Google | \4\6 | 2 | root |
| 7 | YouTube | \4\6\7 | 3 | not root |
I tried the following code, but it raises the error ORA-00937: not a single-group group function.
How do I create a new column based on the value of another column?
SELECT
  ORGANIZATION_ID,
  ORGANIZATION_NAME,
  TREE_ORGANIZATION_ID,
  CASE
    WHEN ORGANIZATION_RANG = MIN(ORGANIZATION_RANG) THEN 'root'
    ELSE 'not root'
  END AS STATUS
FROM ORGANIZATIONS
WHERE TREE_ORGANIZATION_ID LIKE '%\' || '6'
   OR TREE_ORGANIZATION_ID LIKE '%\' || '6' || '\%';
You can try the below:
SELECT
  ORGANIZATION_ID,
  ORGANIZATION_NAME,
  TREE_ORGANIZATION_ID,
  CASE
    WHEN TREE_ORGANIZATION_ID LIKE '%\' || '6' THEN 'root'
    WHEN TREE_ORGANIZATION_ID LIKE '%\' || '6' || '\%' THEN 'not root'
  END AS STATUS
FROM ORGANIZATIONS
WHERE TREE_ORGANIZATION_ID LIKE '%\' || '6'
   OR TREE_ORGANIZATION_ID LIKE '%\' || '6' || '\%';
You want to use an analytic function, not an aggregation function:
SELECT ORGANIZATION_ID, ORGANIZATION_NAME, TREE_ORGANIZATION_ID,
(CASE WHEN ORGANIZATION_RANG = MIN(ORGANIZATION_RANG) OVER ()
THEN 'root'
ELSE 'not root'
END) AS STATUS
FROM ORGANIZATIONS O
WHERE TREE_ORGANIZATION_ID || '\' LIKE '%\' || '6' || '\%';
Note that this also simplifies the logic for matching 6 by testing the organization id with a backslash on the end. You could also use REGEXP_LIKE() for this purpose.
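For illustration, that REGEXP_LIKE() variant might look like the sketch below; the pattern matches a \6 path segment followed by either another backslash or the end of the string:
SELECT ORGANIZATION_ID, ORGANIZATION_NAME, TREE_ORGANIZATION_ID,
       (CASE WHEN ORGANIZATION_RANG = MIN(ORGANIZATION_RANG) OVER ()
             THEN 'root'
             ELSE 'not root'
        END) AS STATUS
FROM ORGANIZATIONS
WHERE REGEXP_LIKE(TREE_ORGANIZATION_ID, '\\6(\\|$)');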

How to perform the same aggregation on every column, without listing the columns?

I have a table with N columns. Let's call them c1, c2, c3, c4, ..., cN. Across multiple rows, I want to get a single row with COUNT(DISTINCT cX) for each X in [1, N].
| c1 | c2 | ... | cN |
|----|----|-----|----|
| 0  | 4  | ... | 1  |
Is there a way I can do this (in a stored procedure) without writing every column name into the query manually?
Why?
We've had a problem where bugs in application servers mean good column values get overwritten by garbage inserted later. To address this, I'm storing the information log-structured, where each row represents a logical UPDATE query. Then, when given the signal that the record is complete, I can determine whether any values were (erroneously) overwritten.
An example of a single correct record in multiple rows: there is at most one value for each column.
| id | initialize_time | start_time | end_time |
|----|-----------------|------------|----------|
| 1  | 12:00am         | NULL       | NULL     |
| 1  | 12:00am         | 1:00pm     | NULL     |
| 1  | 12:00am         | NULL       | 2:00pm   |
Reconciled row:
| 1  | 12:00am         | 1:00pm     | 2:00pm   |
An example of an irreconcilable record that I want to detect:
| id | initialize_time | start_time | end_time |
|----|-----------------|------------|----------|
| 1  | 12:00am         | NULL       | NULL     |
| 1  | 12:00am         | 1:00pm     | NULL     |
| 1  | 9:00am          | 1:00pm     | 2:00pm   | -- new initialize_time => irreconcilable!
You need dynamic SQL for that, which means you have to create a function or run a DO command. Since you cannot return values directly from the latter, a plpgsql function it is:
CREATE OR REPLACE function f_count_all(_tbl text
, OUT columns text[]
, OUT counts bigint[])
RETURNS record LANGUAGE plpgsql AS
$func$
BEGIN
EXECUTE (
SELECT 'SELECT
ARRAY[' || string_agg('''' || quote_ident(attname) || '''', ', ') || ']
, ARRAY[' || string_agg('count(' || quote_ident(attname) || ')' , ', ') || ']
FROM ' || _tbl
FROM pg_attribute
WHERE attrelid = _tbl::regclass
AND attnum >= 1 -- exclude tableoid & friends (neg. attnum)
AND NOT attisdropped -- exclude deleted columns
GROUP BY attrelid
)
INTO columns, counts;
END
$func$;
Call:
SELECT * FROM f_count_all('myschema.mytable');
Returns:
columns | counts
--------------+--------
{c1, c2, c3} | {17, 1, 0}
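Note that the question asks for distinct counts. If that is what you need, only the generated aggregate changes; replace the count(...) ARRAY line in the function body with, for example:
, ARRAY[' || string_agg('count(DISTINCT ' || quote_ident(attname) || ')', ', ') || ']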
More explanation and links about dynamic SQL and EXECUTE can be found in this related question, or in a couple more here on SO.
Related:
Count values for every column in a table
You could even try and return a polymorphic record type to get single columns dynamically, but that's rather complex and advanced. Probably too much effort for your case. More in this related answer.

Concatenate three columns data into one column in Postgres

Can anyone tell me which command is used to concatenate three columns of data into one column in a PostgreSQL database?
E.g., if the columns are:
| begin | month | year |
|-------|-------|------|
| 12    | 1     | 1988 |
| 13    | 3     | 1900 |
| 14    | 4     | 2000 |
| 15    | 5     | 2012 |
the result should look like:
| begin     |
|-----------|
| 12-1-1988 |
| 13-3-1900 |
| 14-4-2000 |
| 15-5-2012 |
Just use the concatenation operator ||: http://www.sqlfiddle.com/#!1/d66bb/2
select begin || '-' || month || '-' || year as begin
from t;
Output:
| BEGIN |
-------------
| 12-1-1988 |
| 13-3-1900 |
| 14-4-2000 |
| 15-5-2012 |
If you want to change the begin column itself, the begin column must be of a string type first; then do this: http://www.sqlfiddle.com/#!1/13210/2
update t set begin = begin || '-' || month || '-' || year ;
Output:
| BEGIN |
-------------
| 12-1-1988 |
| 13-3-1900 |
| 14-4-2000 |
| 15-5-2012 |
UPDATE
Regarding this comment:
"but I'm not getting null value column date"
Use this:
select (begin || '-' || month || '-' || year)::date as begin
from t
Have a look at 9.4. String Functions and Operators
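Side note: with ||, the whole result becomes NULL as soon as any of the three columns is NULL. If you need NULL-safe concatenation, Postgres also offers concat_ws(), which skips NULL arguments. A sketch:
select concat_ws('-', begin, month, year) as begin
from t;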
This is an old post, but I just stumbled upon it. Doesn't it make more sense to construct an actual date value? You can do that using:
select make_date(year, month, begin)
A date seems more useful than a string (and you can even format it however you like using to_char()).
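For example, a sketch assuming begin, month, and year are integer columns (which make_date() requires); the FM modifier suppresses zero-padding so the output matches the 12-1-1988 style above:
select to_char(make_date(year, month, begin), 'FMDD-MM-YYYY') as begin
from t;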