Since we're already using Graylog (version 2.4.6) as a general purpose logging backend for our project, we thought we might as well also use it to monitor resource use. The three major benefits would be:
No need to change our codebase to add additional libraries.
Easy to create charts and graphs for the metrics we're tracking.
Built-in notifications.
Concretely, we're trying to track how many jobs our various Beanstalkd servers have in each of their tubes. If a given tube accumulates more than a certain number of jobs, we would like to be alerted.
Here's a typical message that we're using for a given tube:
    {
        "count" => $totalJobsInTube,
        "tube"  => $tubeName,
        "env"   => $env,
    }
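For illustration, one way such a message could be shipped to Graylog is as a GELF payload over UDP. The Python sketch below is not our actual code; the host, port, and helper name are made up, and custom fields are prefixed with "_" per the GELF spec:

    import json
    import socket
    import zlib

    def send_tube_metric(count, tube, env,
                         graylog_host="graylog.example.com", graylog_port=12201):
        """Send one tube-size reading to Graylog as a GELF message over UDP.

        Host and port are placeholders; custom fields carry the '_' prefix
        required by the GELF spec.
        """
        message = {
            "version": "1.1",
            "host": socket.gethostname(),
            "short_message": f"beanstalk tube {tube} has {count} jobs",
            "_count": count,
            "_tube": tube,
            "_env": env,
        }
        payload = zlib.compress(json.dumps(message).encode("utf-8"))
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.sendto(payload, (graylog_host, graylog_port))

    send_tube_metric(42, "emails", "production")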
I can't think of a way to set up an alert condition in Graylog that lets me specify both a query and which field to look at. The only condition types we have are:
Field content alert condition
Field aggregation alert condition
Message count alert condition
Message conditional count alert condition
Can this even be done in Graylog?
Graylog uses Elasticsearch as its backend, which is not a good system for metrics (time-series data): it's not efficient and doesn't scale well for that kind of workload. This is why most people use a separate monitoring system for measuring resources and other time-series data. It depends on your stack, but there are lots of open-source and commercial offerings for that.
If you want to handle logs and metrics together, I would suggest open-source software: the Elastic Stack can do both, but I would only recommend it if you have a limited number of metrics. Splunk and Sumo Logic can also do logs and metrics, but they are not ideal for time series, especially large numbers of them.
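For example, if you went with Prometheus on the metrics side, a minimal sketch of pushing the same tube counts through a Pushgateway could look like this (the gateway address, metric name, and label names are all assumptions):

    from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

    def push_tube_metric(count, tube, env, gateway="pushgateway.example.com:9091"):
        """Push one tube-size reading to a Prometheus Pushgateway.

        Metric and label names are illustrative; alert thresholds would then
        live in Prometheus/Alertmanager rather than in Graylog.
        """
        registry = CollectorRegistry()
        gauge = Gauge(
            "beanstalk_tube_jobs",
            "Number of jobs currently in a beanstalkd tube",
            ["tube", "env"],
            registry=registry,
        )
        gauge.labels(tube=tube, env=env).set(count)
        push_to_gateway(gateway, job="beanstalk_monitor", registry=registry)

    push_tube_metric(42, "emails", "production")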
We are collecting network traffic from switches using Zeek in the form of ‘connection logs’. The connection logs are then stored in Elasticsearch indices via filebeat. Each connection log is a tuple with the following fields: (source_ip, destination_ip, port, protocol, network_bytes, duration) There are more fields, but let’s just consider the above fields for simplicity for now. We get 200 million such logs every hour for internal traffic. (Zeek allows us to identify internal traffic through a field.) We have about 200,000 active IP addresses.
What we want to do is digest all these logs and create a graph where each node is an IP address and an edge (directed, source → destination) represents traffic between two IP addresses. There will be one unique edge for each distinct (port, protocol) tuple. Each edge will have these properties: average duration, average bytes transferred, and a histogram of log counts by hour of day.
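To make the target aggregation concrete, this purely illustrative Python sketch spells out the per-edge reduction we are after (the ts timestamp field is an assumption; this is a specification of the output, not how we would run it at scale):

    from collections import defaultdict
    from datetime import datetime

    def build_edges(conn_logs):
        """Reduce connection logs into per-edge statistics.

        Each log is a dict with source_ip, destination_ip, port, protocol,
        network_bytes, duration, and ts (an ISO-8601 timestamp, assumed here).
        Returns {edge_key: {"avg_duration", "avg_bytes", "hourly_counts"}}.
        """
        acc = defaultdict(lambda: {"n": 0, "duration": 0.0, "bytes": 0,
                                   "hourly": [0] * 24})
        for log in conn_logs:
            key = (log["source_ip"], log["destination_ip"],
                   log["port"], log["protocol"])
            a = acc[key]
            a["n"] += 1
            a["duration"] += log["duration"]
            a["bytes"] += log["network_bytes"]
            hour = datetime.fromisoformat(log["ts"]).hour
            a["hourly"][hour] += 1

        return {
            key: {
                "avg_duration": a["duration"] / a["n"],
                "avg_bytes": a["bytes"] / a["n"],
                "hourly_counts": a["hourly"],
            }
            for key, a in acc.items()
        }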
I have tried using Elasticsearch’s aggregation and also the newer Transform technique. While both work in theory, and I have tested them successfully on a very small subset of IP addresses, the processes simply cannot keep up for our entire internal traffic. E.g. digesting 1 hour of logs (about 200M logs) using Transform takes about 3 hours.
My question is:
Is post-processing Elasticsearch data the right approach to building this graph? Or is there some product we could use upstream to do this job? Someone suggested looking into ntopng, but I did not find this specific use case in their product description. (Not sure if it is relevant, but we use ntop's PF_RING product as a frontend for Zeek.) Are there other products that do the job out of the box? Thanks.
What problems or root causes are you attempting to elicit with a graph of Zeek east-west traffic?
Seems that a more-tailored use case, such as a specific type of authentication, or even a larger problem set such as endpoint access expansion might be a better use of storage, compute, memory, and your other valuable time and resources, no?
Even if you did want to correlate or group on Zeek data, try to normalize it to OSSEM, and there would be no reason to, say, collect the full tuple when you can collect a community-id instead. You could correlate Zeek in the large to Suricata in the small. Perhaps a better data architecture would be VAST.
Kibana, in its latest iterations, does have Graph, and even older versions can leverage the third-party kbn_network plugin. I could see you hitting a wall with 200k active IP addresses and Elasticsearch aggregations, or even summary indexes.
Many orgs will build data architectures beyond the simple Serving layer provided by Elasticsearch. What I have heard of is a Kappa architecture streaming directly into a graph database such as Dgraph, with perhaps just those edges of the graph exposed from a Serving layer.
There are other ways of asking questions from IP address data, such as the ML options in AWS SageMaker IP Insights or the Apache Spot project.
Additionally, I'm a huge fan of getting the right data only as the situation arises, although in an automated way so that the puzzle pieces bubble up for me and I can simply lock them into place. If I were working with Zeek data especially, I could leverage a platform such as SecurityOnion and its orchestrated Playbook engine to kick off other tasks for me, such as querying out with one of the Velocidex tools, or even cross-correlating using the built-in Sigma sources.
I am consuming a BigQuery table as a datasource. It is 'unbounded', as it is updated via a batch process. It contains session-keyed reporting data from server logs, where each row captures a request. I do not have access to the original log data and must consume the BigQuery table.
I would like to develop a custom Java-based Google Dataflow template using the Beam API, with the goals of:
collating keyed session objects
deriving session level metrics
deriving filterable window level metrics based on session metrics, e.g., percentage of sessions with errors during previous window and percentage of errors per filtered property, e.g., error percentage per device type
writing the result as a formatted/compressed report to cloud storage.
This seems like a fairly standard use case? In my research thus far, I have not yet found a perfect example and still have not been able to determine the best practice approach for certain basic requirements. I would very much appreciate any pointers. Keywords to research? Documentation, tutorials. Is my current thinking right or do I need to consider other approaches?
Questions :
1. Beam windowing and the BigQuery I/O connector - I see that I can specify a window type and size via the Beam API. My BQ table has a timestamp field per row. Am I supposed to somehow pass this via configuration, or is it supposed to be automagic? Do I need to do this manually via a SQL query somehow? This is not clear to me.
2. Fixed-time windowing vs. session windowing functions - the examples are basic and do not address any edge cases. My sessions can last hours. There are potentially hundreds of thousands of session keys per window. Would session windowing support this?
3. BigQuery vs. BigQueryClientStorage - the difference is not clear to me. I understand that BQCS provides a performance benefit, but do I have to store BQ data in a preliminary step to use it? Or can I simply query my table directly via BQCS and it takes care of that for me?
For number 1, you can simply use a withTimestamps function before applying windowing; this assigns the event timestamp to your items. There are Python examples of this pattern as well.
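A rough Python sketch of that, assuming each row has an event_timestamp column in epoch seconds (the table and column names here are placeholders, and the usual GCP pipeline options such as project and a GCS temp_location are still needed):

    import apache_beam as beam
    from apache_beam.transforms.window import TimestampedValue

    with beam.Pipeline() as p:
        timestamped = (
            p
            # Each element is a dict per row; the 'event_timestamp' column
            # (epoch seconds) is an assumption about your table's schema.
            | "ReadFromBQ" >> beam.io.ReadFromBigQuery(
                query="SELECT * FROM `my-project.my_dataset.requests`",
                use_standard_sql=True)
            | "AssignEventTime" >> beam.Map(
                lambda row: TimestampedValue(row, row["event_timestamp"]))
        )
        # 'timestamped' can now be windowed, e.g. with beam.WindowInto(...).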
For number 2, the documentation states:
Session windowing applies on a per-key basis and is useful for data that is irregularly distributed with respect to time. [...] If data arrives after the minimum specified gap duration time, this initiates the start of a new window.
Also, per the Java documentation, you can only specify a minimum gap duration, not a maximum. This means that session windowing can easily support sessions lasting hours. After all, all it does is keep a per-key window open and extend it as long as new elements for that key keep arriving within the gap duration.
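A minimal, runnable Python sketch of session windowing along those lines, using toy rows in place of the BigQuery output (the field names and the 30-minute gap are assumptions):

    import apache_beam as beam
    from apache_beam.transforms.window import Sessions, TimestampedValue

    def session_metrics(session_id, rows):
        """Summarise one session window's rows (illustrative metrics only)."""
        rows = list(rows)
        return {
            "session_id": session_id,
            "requests": len(rows),
            "errors": sum(1 for r in rows if r.get("status", 200) >= 500),
        }

    # Toy rows standing in for the timestamped BigQuery output; the field
    # names (session_id, status, event_timestamp) are assumptions.
    rows = [
        {"session_id": "a", "status": 200, "event_timestamp": 1_600_000_000},
        {"session_id": "a", "status": 500, "event_timestamp": 1_600_000_900},
        {"session_id": "b", "status": 200, "event_timestamp": 1_600_003_600},
    ]

    with beam.Pipeline() as p:
        (
            p
            | "Create" >> beam.Create(rows)
            | "Stamp" >> beam.Map(lambda r: TimestampedValue(r, r["event_timestamp"]))
            | "KeyBySession" >> beam.Map(lambda r: (r["session_id"], r))
            # Rows for the same key arriving within 30 minutes of each other
            # land in one session window, however long the session runs overall.
            | "SessionWindows" >> beam.WindowInto(Sessions(gap_size=30 * 60))
            | "GroupBySession" >> beam.GroupByKey()
            | "SessionMetrics" >> beam.MapTuple(session_metrics)
            | "Print" >> beam.Map(print)
        )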
For number 3, the difference between the BigQuery I/O connector and the BigQuery Storage API is that the latter (an experimental feature as of 01/2020) reads the stored data directly, without the intermediate export/staging step that the standard connector goes through. This means that, with the Storage API, the documentation states:
you can't use it to read data sources such as federated tables and logical views
Also, the two methods have different limits and quotas, which you can find in the documentation linked above.
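As a hedged illustration, recent Beam SDKs expose the Storage API path as a read method on the same connector, so there is no separate "store the data first" step on your side (availability depends on your SDK version; the table name below is a placeholder):

    import apache_beam as beam
    from apache_beam.io.gcp.bigquery import ReadFromBigQuery

    with beam.Pipeline() as p:
        rows = (
            p
            # DIRECT_READ goes through the BigQuery Storage API instead of an
            # export job; the data stays where it is in BigQuery's storage.
            | "ReadViaStorageApi" >> ReadFromBigQuery(
                table="my-project:my_dataset.requests",
                method=ReadFromBigQuery.Method.DIRECT_READ)
        )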
With the new beta BI Engine Reservations, I've noticed some queries speed up, but others remain unaffected. Will it be possible
- to monitor how the reservation is being used?
- to have some control over how the reservation is used?
When it comes to control, I've seen no indication that you'll have any: the system decides what the most efficient mechanism is (BI Engine, query cache, etc.) and allocates accordingly. Also, the size of your reservation, its usage, and data age are factored into what is added to and subsequently removed from the BI Engine reservation.
While that may seem frustrating, it's also the selling point: zero-config, automatic acceleration of your dashboards. As Google iterates quickly on these products, I would expect some controls to find their way in eventually.
As a workaround, you could use a separate project for data you want to ensure has access to the full reservation (since BI Engine is project-level).
As was mentioned elsewhere, there are a handful of metrics that can be viewed using Stackdriver logging (if you enable it). These are all high-level metrics, and are listed in the documentation:
Reservation Total Bytes
Reservation Used Bytes
Inflight Requests
Request Count
Request Execution Times
These won't likely give you a lot of the information you're looking for, but can be monitored for patterns.
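If you want to watch those metrics programmatically rather than in the console, a rough sketch with the Cloud Monitoring client library might look like this (the project ID is a placeholder, and the exact BI Engine metric type string is left for you to look up in the documentation):

    import time
    from google.cloud import monitoring_v3

    PROJECT_ID = "my-project"   # placeholder
    METRIC_TYPE = "..."         # exact BI Engine metric type: see the docs

    client = monitoring_v3.MetricServiceClient()
    now = time.time()
    interval = monitoring_v3.TimeInterval(
        {
            "end_time": {"seconds": int(now)},
            "start_time": {"seconds": int(now - 3600)},  # last hour
        }
    )
    series = client.list_time_series(
        request={
            "name": f"projects/{PROJECT_ID}",
            "filter": f'metric.type = "{METRIC_TYPE}"',
            "interval": interval,
            "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
        }
    )
    for ts in series:
        print(ts.metric.type, [p.value for p in ts.points])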
You can use Elasticsearch and Logstash for monitoring and for implementing a security environment. The way it works is simple, and it operates in near real time.
I have the following use case, for which I'm trying to find the optimal solution using either the filesystem or a database (an RDBMS or some flavour of NoSQL). Any advice is welcome, as I want to see what is optimal.
Client application: will generate logs at intervals of 1-3 seconds. By logs I mean structured log data (about connections, applications used, processes used, screenshots, etc.). Some log data will be structured, some will be unstructured (so the schema can change).
Storage solution: will need to persist all this data very fast. Will sit on one or more servers. It doesn't matter if it's a hybrid solution combining the filesystem, an RDBMS and/or (any suitable flavour of) NoSQL.
Post-processing: the data needs to be queryable, of course, so a plain key-value store would not suffice (except perhaps for the screenshots).
As a reference, here's a more concrete example:
User runs the client for 2-3 hours (during a "monitoring period"). It sends log data over the wire to the server (storage). Writing speed and data accuracy are vital here.
Management system accumulates the data and makes a report on certain characteristics. All log data should be retrievable if needed, but there will be a specific query for a set of users in a given monitoring period. Reading speed is less critical here, but data accuracy and eventually being able to find all log parts again are necessary.
If I need to give more information, please let me know.
If you prefer to roll your own rather than use logging packages, I would stick with append-only text files. You can certainly encode screenshots in Base64 and keep them in the same file, but I would rather store them separately in the filesystem, with a generated filename recorded in the log.
As for reporting, you can obviously read the files in a text editor, but if you need more sophisticated and regular management reporting, you can build an ETL that loads only the information you report on into an RDBMS. You can always go back and rerun the ETL if you decide you want more information later on.
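A minimal sketch of that approach, with an append-only JSON-lines log and screenshots stored as separate files under generated names (paths and field names are made up for illustration):

    import json
    import time
    import uuid
    from pathlib import Path

    LOG_DIR = Path("monitor-logs")            # placeholder locations
    SCREENSHOT_DIR = LOG_DIR / "screenshots"

    def append_log(user_id, event, screenshot_bytes=None):
        """Append one structured log entry; store screenshots as separate files."""
        LOG_DIR.mkdir(parents=True, exist_ok=True)
        entry = {"ts": time.time(), "user": user_id, "event": event}
        if screenshot_bytes is not None:
            SCREENSHOT_DIR.mkdir(parents=True, exist_ok=True)
            name = f"{uuid.uuid4().hex}.png"
            (SCREENSHOT_DIR / name).write_bytes(screenshot_bytes)
            entry["screenshot"] = name        # only the generated filename goes in the log
        # One JSON object per line, appended; nothing is ever rewritten in place,
        # and a later ETL can re-read these files into an RDBMS for reporting.
        with open(LOG_DIR / "client.log", "a", encoding="utf-8") as fh:
            fh.write(json.dumps(entry) + "\n")

    append_log("user-42", {"type": "connection", "host": "10.0.0.5"})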
I want to display some contents of the system log using the Apple System Log API. Is there a way to query the log database incrementally, i.e. only get the messages that are new since the last query? The aslresponse_next call only seems to iterate over messages that existed at the time of the query.
I think I could fake incremental querying by setting a minimum time filter, but the approach seems prone to various timing issues, so I’d rather have something more official if possible.