I am sending events with logstash to an elasticsearch database. An event is structured like this:
timestamp:2014-04-04 12:00:00 name:'leo' time:10
timestamp:2014-04-04 12:00:30 name:'john' time:15
...
...
In the Kibana interface, I am able to display some graphs; for example, the mean of the time field over timestamp.
Since logstash is continuously sending events, I would like to display the latest event sent in real time. Is it possible to write a query that returns only the latest event, using the timestamp field? I don't want to touch Kibana's "time filter".
Thanks in advance for your help
The easiest way to see the latest entry is to sort by timestamp, descending, in the 'Events' section of the Kibana interface. If you really only wanted to see one result, you could adjust the paging settings to only return one page consisting of one entry, and set the 'Auto-refresh' to a short interval (the shortest is 5s, I believe.)
Here's the Kibana documentation - might help.
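If you would rather query Elasticsearch directly than go through the Kibana UI, a sort/size query can return just the newest document. A minimal sketch, assuming your index matches logstash-* and the field is named timestamp (Logstash's own default is @timestamp):

curl -XGET 'http://localhost:9200/logstash-*/_search' -d '{
  "size": 1,
  "sort": [ { "timestamp": { "order": "desc" } } ]
}'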
I've been looking at a recent event in Splunk with sourcetype WinHostMon, and I see two different values for StartTime and _time:
StartTime="20200427223006.448182-300"
_time is recorded as 2020-04-28T15:38:13.000-04:00
If the last part is a timezone, there are two things about this that are strange:
The timezone for StartTime is in the middle of the Atlantic.
The times don't actually match.
Question: What is the actual time of this event, if such a thing can actually be determined, and what is causing the discrepancy between these two times?
(I tried to post this on Splunk Answers but they seem to have a labyrinth to stop people from signing up and I was unable to get an activated account.)
_time is the timestamp of the event, that is, when the event was generated or written to a log file. This is the field Splunk uses for default sorting and rendering in tables and time charts.
For WinHostMon events, most notably Process events, StartTime is when that process started.
Hence, it is not surprising that these two timestamps are significantly different. The process may have started at some point in the past, and the WinHostMon input may generate a list of active processes every 5 minutes or so (or more, or less). As for the "timezone": StartTime appears to be in WMI's CIM_DATETIME format, in which the trailing -300 is a UTC offset in minutes (UTC-5), not a geographic location.
_time is the timestamp of the event as defined in props.conf, or, if undefined, whenever Splunk received the event (as often happens with untagged JSON).
The field StartTime is, so far as I can tell, not related to whatever is populating _time.
If you open the add-on's props.conf, you'll see how the timestamp and the field extraction for StartTime are defined.
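For illustration, a props.conf stanza defining timestamp recognition and a search-time extraction could look like the following; these are hypothetical values, not the actual add-on's configuration:

[WinHostMon]
# how Splunk locates and parses the timestamp that becomes _time (hypothetical)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
# StartTime is extracted as an ordinary search-time field, separate from _time
EXTRACT-StartTime = StartTime="(?<StartTime>[^"]+)"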
Is there a way to get the latest event of a specific type from a specific device? So far I queried all events via myURL/event/events?source=<<ID>>&type=<<type>>.
Or is there a way to get a collection of events ordered by their creationTime? That could solve my problem, too.
In the documentation I only found parameters like dateFrom and dateTo. But what if I don't know the time range of the last event?
The syntax is: /event/events?type={type}&source={source}.
For more information on the available endpoints:
GET /platform
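For example, a request for the events of one device might look like this (placeholder tenant URL, device id, and event type; Cumulocity REST calls use Basic authentication):

GET https://<tenant>.cumulocity.com/event/events?source=12345&type=myType
Accept: application/vnd.com.nsn.cumulocity.eventCollection+json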
Answer from the support team:
Currently there is no way to reverse the order of events. Besides dateFrom and dateTo you can also use creationFrom and creationTo. These take the creationTime (the server-side timestamp set when the event was created) instead of the time that is sent within the event, but the order will still be oldest -> newest.
The best approach currently is to use a well-estimated time range (so you don't end up with 0 events in the response) where the dateTo/creationTo lies in the future. If you add the query parameter withTotalPages=true, the result will give you the total number of pages in the statistics part.
Knowing the total pages, you can run the query again with currentPage=XX instead of withTotalPages=true and take the last element.
We have this functionality on measurements and audits, where you can add the parameter revert=true. I will add an improvement request to extend this to the other APIs, but at the moment you are limited to the workaround.
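A minimal sketch of that workaround in Python, assuming the requests library; the tenant URL, credentials, device id, and event type below are placeholders:

import requests

BASE = "https://<tenant>.cumulocity.com"  # placeholder tenant URL
AUTH = ("user", "password")               # placeholder credentials
params = {
    "source": "12345",                    # placeholder device id
    "type": "myType",                     # placeholder event type
    "dateFrom": "2017-01-01T00:00:00Z",   # generous estimate of the range start
    "dateTo": "2099-01-01T00:00:00Z",     # deliberately in the future
    "pageSize": 100,
    "withTotalPages": "true",
}

# First request: learn how many pages the result set has.
resp = requests.get(BASE + "/event/events", params=params, auth=AUTH)
last_page = resp.json()["statistics"]["totalPages"]

# Second request: fetch the last page and take its last element,
# which is the newest event given the oldest -> newest ordering.
params.pop("withTotalPages")
params["currentPage"] = last_page
events = requests.get(BASE + "/event/events", params=params, auth=AUTH).json()["events"]
latest_event = events[-1]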
You can just set the dateFrom parameter and the pageSize to 1, like so: &pageSize=1&dateFrom=1970-01-01. As of September 2017, this returns the most recent event.
I'm facing a very strange issue in my Splunk search. I have a data input coming from a REST API that returns a multi-level (nested) JSON response:
The entity node has several child nodes, each representing one access point. Each access point contains a field called ipAddress.
This API is called every 5 minutes and the response is stored in Splunk. When I search for the list of IP addresses from one event, I don't get all of them. For some reason, it is as if Splunk reads only the first seven nodes inside entity, because when I do:
source="rest://AccessPointDetailsAPI" | head 1
Splunk shows only the following values for the field (7 values, although there are around 27):
I'm using a demo license, if that matters. Why can't I see all the values? If I change my search to look for a specific ipAddress that is in the response but not in that list, it returns no records.
Thanks and regards,
I think I understand the problem now. The event is one big JSON document, and Splunk is not parsing all of its fields automatically.
We need to tell Splunk to parse the specific field we need with spath, specifying the path to it:
yoursearch | spath output=myIpAddress path=queryResponse.entity{}.accessPointDetailsDTO.ipAddress | table myIpAddress
http://docs.splunk.com/Documentation/Splunk/5.0.4/SearchReference/Spath
But I think it is also worth considering whether the data input should be divided into multiple events rather than a single huge event.
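If the underlying cause is Splunk's limits on automatic extraction from large events, the relevant settings live in limits.conf; a hedged sketch, since the exact defaults vary by version:

[kv]
# automatic key/value extraction only inspects this many characters of the raw event
maxchars = 51200

[spath]
# cap on how much of an event automatic spath extraction will process
extraction_cutoff = 50000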
I want to play with some really simple queries for a report, but I want to group everything by the creation date. The problem I am having is that time exists in the database, but not the date. From searching around in trac-related resources, it looks like I need to install trac.util.datefmt to be able to extract this information from datetime(time). I can find the API documentation for trac.util.datefmt, but not a download link to get the .egg.
Am I going in the right direction? If I can do what I need (i.e. get the creation month/day/year) without a plugin, what column do I use? I don't see anything else in the schema that is reasonable. If I do need trac.util.datefmt, where do I download it from? And if I really need a different plugin, which one should I be using?
I'll assume Trac >= 1.0. In Trac >= 1.0 the time column is Unix epoch time in microseconds: the number of microseconds that have elapsed since January 1st 1970. You can divide the value by 1e6 and put the result in a converter to see an example of extracting the datetime from the time column. trac.util.datefmt is part of the egg that ships with Trac, but since you are working with reports, it doesn't sound like you need that function to accomplish your aim.
In a Trac report the time column will be automatically formatted as a date. You can look at the default report {1} as an example. I'm not sure how you intend to group by creation date. Do you wish to group tickets created in a certain datetime range?
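If grouping by creation month is the goal, a report query along these lines should work on a SQLite-backed Trac; the division by 1000000 converts the microsecond timestamps to seconds, and __group__ is the column name Trac's report module uses for grouping:

SELECT strftime('%Y-%m', time / 1000000, 'unixepoch') AS __group__,
       id AS ticket, summary, time AS created
FROM ticket
ORDER BY time DESC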
I am doing a cron-style search of activities, and I want to retrieve Google+ activities published after the timestamp at which the last search was run. How can this be done?
The current documentation seems to allow searching only by keywords and doesn't mention a timestamp range filter for search.
Here is the link to the documentation
https://developers.google.com/+/api/latest/activities/search
One way to do this (a sketch follows the list below) would be to:
a. Store the timestamp of the previous search as previous_search_timestamp.
b. In every search, sort the results by recency (as allowed by the API).
c. Iterate over the results of the current search until you come across an activity whose published <= previous_search_timestamp.
d. Stop processing the results from that activity onwards (and stop making further pagination requests), since those activities were already retrieved by the previous search. You don't want to make redundant API calls or do redundant data processing on your server :)
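A minimal sketch of steps a-d with google-api-python-client, assuming an API key and that previous_search_timestamp is stored as an RFC 3339 UTC string, which is the format of the API's published field:

from googleapiclient.discovery import build

service = build("plus", "v1", developerKey="YOUR_API_KEY")  # placeholder key

def fetch_new_activities(query, previous_search_timestamp):
    new_activities = []
    request = service.activities().search(query=query, orderBy="recent",
                                          maxResults=20)
    while request is not None:
        response = request.execute()
        for activity in response.get("items", []):
            # published is RFC 3339 in UTC, so string comparison follows
            # chronological order.
            if activity["published"] <= previous_search_timestamp:
                return new_activities  # step d: stop early
            new_activities.append(activity)  # newer than the last run
        # Only paginate while we are still seeing unseen activities.
        request = service.activities().search_next(request, response)
    return new_activities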