Trying to write a CloudWatch Logs Insights query to concatenate error messages with the same timestamp, so they are displayed as one row rather than multiple rows in the result.
So far I have tried the query below.
fields @timestamp, concat(@message)
| filter @message like /(?i)(Exception|error|fail)/
| limit 20
This displays the results as below.
2019-09-12T12:17:09.803+10:00 12:17:09,720 |-ERROR in A
2019-09-12T12:17:09.803+10:00 12:17:09,720 |-ERROR in B
2019-09-12T12:17:09.803+10:00 12:17:09,720 |-ERROR in C
I am expecting the below result.
2019-09-12T12:17:09.803+10:00 12:17:09,720 |- ERROR in A -ERROR in B -ERROR in C
The concat operator is not an aggregating function, so it will not do what you are looking for.
Rather, it is used for concatenating multiple values in a single row, e.g.
fields @timestamp, concat("Got message ", @message, " from stream ", @logStream)
would give you
| 2019-09-12T12:17:09.803+10:00 12:17:09,720 | Got message bla from stream some_log_stream |
As far as I know there is no way to aggregate strings from multiple rows into a single row.
I have the following AWS CloudWatch query:
fields @timestamp, @message
| filter @message like /(?i)(error|except)/
| filter !ispresent(level) and !ispresent(eventType)
| stats count(*) as ErrorCount by @message
| sort ErrorCount desc
Results end up looking something like this with the message and a count:
The first 4 results are actually the same error. However, since they have different (node:*) values at the beginning of the message, it ends up grouping them as different errors.
Is there a way for the query to parse/ignore the (node:*) part so that the first 4 results in the image would be considered just one result with a total count of 2,997?
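One possible approach (a sketch, not tested against the real data: it assumes every such message starts with a literal prefix of the form (node:12345)) is to strip that prefix with parse and group by the remainder:
fields @timestamp, @message
| filter @message like /(?i)(error|except)/
| filter !ispresent(level) and !ispresent(eventType)
| parse @message /\(node:\d+\)\s*(?<normalized_message>.*)/
| stats count(*) as ErrorCount by normalized_message
| sort ErrorCount desc
Messages that don't match the parse pattern will have a blank normalized_message and end up grouped together, so the pattern may need adjusting to the real log format.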
My splunk result looks like this:
9/1/20
5:00:14.487 PM
2020-09-01 16:00:14.487, 'TOTALITEMS'="Number of items registered in the last 2 hours ", COUNT(*)="1339"
I am trying to table the number that appears at the end in quotes.
index=my_db sourcetype=no_of_items_registered source=P_No_of_items_registered_2hours | rex field=_raw "\"Number of items registered in the last 2 hours \", COUNT(\*)=\"(?P<itm_ct>\d+)\"$" | table itm_ct
This displays a blank table without any numbers. The number of rows in the table, however, matches the number of events.
Any help much appreciated
The regular expression doesn't match the sample data. Literal parentheses must be escaped in the regex. Try this:
index=my_db sourcetype=no_of_items_registered source=P_No_of_items_registered_2hours
| rex "COUNT\(\*\)="(?<itm_ct>\d+)" | table itm_c
I am trying to use a LATERAL JOIN on a particular data set, however I cannot seem to get the syntax correct for the query.
What am I trying to achieve:
Take the first column in the dataset (see picture) and use it as the table headers, and populate the rows with the data from the StringValue column.
Currently it appears like this:
cfname              | stringvalue
--------------------+-------------------
customerrequesttype | newformsubmission
Assignmentgroup     | ITDEPT
and I would like to have it appear as this:
customerrequesttype | Assignmentgroup
--------------------+----------------
newformsubmission   | ITDEPT
As mentioned, I am very new to SQL and know only the basics.
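One common way to get this shape is conditional aggregation rather than a LATERAL JOIN. A minimal sketch, assuming the data lives in a table called customfield_values with a per-record key column issue_id (both names are hypothetical):
-- Pivot rows into columns with conditional aggregation.
-- Table and column names (customfield_values, issue_id) are assumptions.
SELECT
  issue_id,
  MAX(CASE WHEN cfname = 'customerrequesttype' THEN stringvalue END) AS customerrequesttype,
  MAX(CASE WHEN cfname = 'Assignmentgroup' THEN stringvalue END) AS assignmentgroup
FROM customfield_values
GROUP BY issue_id;
This works in most SQL dialects, as long as the set of cfname values you want as columns is known up front.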
I'm trying to write a query that first aggregates a count by field and then by bin(1h). For example, I would like to get a result like:
# Date Field Count
1 2019-01-01T10:00:00.000Z A 123
2 2019-01-01T11:00:00.000Z A 456
3 2019-01-01T10:00:00.000Z B 567
4 2019-01-01T11:00:00.000Z B 789
Not sure if it's possible, though; the query should be something like:
fields Field
| stats count() by Field by bin(1h)
Any ideas how to achieve this?
Is this what you need?
fields Field | stats count() by Field, bin(1h)
If you want to create a line chart, you can do it by separately counting each value that your field could take.
fields
Field = 'A' as is_A,
Field = 'B' as is_B
| stats sum(is_A) as A, sum(is_B) as B by bin(1hour)
This solution requires your query to include a string literal of each value ('A' and 'B' in OP's example). It works as long as you know what those possible values are.
This might be what Hugo Mallet was looking for, except the avg() function won't work here, so he'd have to calculate the average by dividing by a total.
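For example, a sketch along those lines (it reuses the hypothetical values 'A' and 'B' from above, and assumes arithmetic on aggregate results is allowed inside stats):
fields Field = 'A' as is_A, Field = 'B' as is_B
| stats sum(is_A) / count(*) as avg_A, sum(is_B) / count(*) as avg_B by bin(1hour)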
Not able to group by a certain field and create visualizations.
fields Field
| stats count() by Field, bin(1h)
Keep getting this message
No visualization available. Try this to get started:
stats count() by bin(30s)
I have a table A within a dataset in Bigquery. This table has multiple columns and one of the columns called hits_eventInfo_eventLabel has values like below:
{ID:AEEMEO,Score:8.990000;ID:SEAMCV,Score:8.990000;ID:HBLION;Property ID:DNSEAWH,Score:0.391670;ID:CP1853;ID:HI2367;ID:H25600;}
If you write this string out in a tabular form, it contains the following data:
ID      | Score
--------+----------
AEEMEO  | 8.990000
SEAMCV  | 8.990000
HBLION  | -
DNSEAWH | 0.391670
CP1853  | -
HI2367  | -
H25600  | -
Some IDs have scores, some don't. I have multiple records with similar strings populated under the column hits_eventInfo_eventLabel within the table.
My question is how can I parse this string successfully WITHIN BIGQUERY so that I can get a list of property ids and their respective recommendation scores (if existing)? I would like to have the order in which the IDs appear in the string to be preserved after parsing this data.
Would really appreciate any info on this. Thanks in advance!
I would use a combination of SPLIT to separate the string into different rows and REGEXP_EXTRACT to separate it into different columns, i.e.
select
  regexp_extract(x, r'ID:([^,]*)') as id,
  regexp_extract(x, r'Score:([\d\.]*)') as score
from (
  select split(x, ';') as x
  from (
    select 'ID:AEEMEO,Score:8.990000;ID:SEAMCV,Score:8.990000;ID:HBLION;Property ID:DNSEAWH,Score:0.391670;ID:CP1853;ID:HI2367;ID:H25600;' as x))
It produces the following result:
Row id score
1 AEEMEO 8.990000
2 SEAMCV 8.990000
3 HBLION null
4 DNSEAWH 0.391670
5 CP1853 null
6 HI2367 null
7 H25600 null
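Note that the query above uses BigQuery's legacy SQL, where SPLIT flattens into rows. A rough Standard SQL equivalent (a sketch; it uses UNNEST ... WITH OFFSET to keep the order in which the IDs appear in the string, as the question asks) would be:
select id, score
from (
  select
    regexp_extract(part, r'ID:([^,]*)') as id,
    regexp_extract(part, r'Score:([\d.]*)') as score,
    pos
  from unnest(split('ID:AEEMEO,Score:8.990000;ID:SEAMCV,Score:8.990000;ID:HBLION;Property ID:DNSEAWH,Score:0.391670;ID:CP1853;ID:HI2367;ID:H25600;', ';')) as part with offset pos
  where part != ''
)
order by pos;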
You can write your own JavaScript functions in BigQuery to get exactly what you want now: http://googledevelopers.blogspot.com/2015/08/breaking-sql-barrier-google-bigquery.html