syslog-ng: prevent the same log message from being printed multiple times within a given time frame

I am new to using syslog-ng, and I am wondering whether it provides a way to avoid printing the same log message multiple times when the same event occurs repeatedly.

Yes, it does.
For example, the suppress() option can be used for such purposes:
https://www.syslog-ng.com/technical-documents/doc/syslog-ng-open-source-edition/3.26/administration-guide/suppress
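For example, with a destination like this (the file path and the 30-second window are just placeholders), identical messages arriving within the window are collapsed into a single "last message repeated n times" style line:

destination d_messages {
    file("/var/log/messages" suppress(30));
};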

Related

Adding fields to Cloudwatch without using JSON

So I have typical run-of-the-mill logs from Nginx and Tomcat servers, which are just single-line text files in a typical log format. I have changed the Tomcat access logs to output pipe-delimited fields so I can easily process them using some Unix scripts. I'd like to get rid of my Unix scripts and move to using CloudWatch to process my logs in a similar manner; however, I found out that CloudWatch really doesn't understand anything beyond timestamp, message, and logstream by default.
It will add fields using JSON, but JSON is verbose when it comes to log files. I'd like to just let it process a CSV file, which seems like an obvious alternative to JSON. I'm willing to change my log format to meet a requirement like that, but I can't find any information about how I could do that.
Is my only option to translate my logs into JSON in order to add fields to CloudWatch? I am aware of the parse command, but I find it cumbersome to reconstitute my fields every time I want to build a query, especially since these will mostly be access logs with numerous fields. I have the AWS CloudWatch Logs agent set up on my systems and I'm currently sending these logs to CloudWatch.
The closest thing there is to handling space-delimited log files is to use Metric Filters. Or at least that's how the authors of CloudWatch designed it.
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/FilterAndPatternSyntax.html
The best examples of this are here:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CountOccurrencesExample.html
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/ExtractBytesExample.html
Not sure if this is going to work for what I'm trying to do with logs, but it's a start. And it's the closest thing to a proper answer. If you want it done right, you gotta do it yo'self.
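For reference, a space-delimited filter pattern for an access-log style line looks roughly like this (the field names are just placeholders; the syntax is described on the FilterAndPatternSyntax page above):

[ip, id, user, timestamp, request, status_code, bytes]

and you can add conditions on individual fields, for example to count only 4xx responses:

[ip, id, user, timestamp, request, status_code = 4*, bytes]

Keep in mind that metric filters produce CloudWatch metrics from matching events; they don't add searchable fields to the stored log events, which is part of why this is only "the closest thing".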

How to display a status depending on the data flow position

Consider for example this modified Simple TCP sample program:
How can I display the current state of the program like
Wait for Connection
Connected
Connection terminated
on the front panel, depending on where the "data flow" currently is.
The easiest way to do this is to place a string indicator on your front panel and write messages to a local variable of this indicator at each point where you want to see a status update.
You need to keep in mind how LabVIEW dataflow works: code will execute as soon as the data it depends on becomes available. Sometimes you can use existing structures to enforce this - for example, if you put a string constant inside your loop and wire it to a local variable terminal outside the loop, the write will only happen after the loop exits. Sometimes you may need to enforce that dataflow artificially, for example by placing your operation inside a sequence frame and connecting a wire to the border of the sequence: then what's inside the sequence will only happen after data arrives on that wire. (This is about the only thing you should use a sequence for!)
This method is not guaranteed to be deterministic, but it's usually good enough for giving a simple status indication to the user.
A better version of the above would be to send the status messages on a queue or notifier which you read, and update the status indicator, in a separate loop. The queue and notifier write functions have error terminals which can help you to enforce sequence. A notifier is like the local variable in that you will only see the most recent update; a queue keeps all the data you write to it in the right order so would be more suitable if you want to log all the updates to a scrolling list or log file. With this solution you could add more features: for example the read loop could add a timestamp in front of each message so you could see how recent it was.
A really good solution to this general problem is to use a design pattern based on a state machine. Now your program flow is clearly organised into different states and it's very easy to add in functionality like sending a different message from each state. There are good examples and project templates for these design patterns included with recent versions of LabVIEW.
You should be able to find more information on any of these terms (local variables, sequence structures, queues, notifiers, state machines) in the LabVIEW help or on the NI website.

handling server reboot in splunk alerts

I have Splunk alerts set up. However, when an application server is restarted, many log entries are created which trigger these alerts. I would like to either ignore these log entries or ignore the alerts when an application server is restarted.
Short of being able to do that, is there a way to annotate the Splunk timeline? That way I could annotate the timeline, and when people get alerted they could open the report and see that a server reboot occurred. Other tools with timelines allow this sort of annotation.
The best way to implement 'safe work times' is by using a lookup file.
Use fields like date_day and date_hour to define the safe time (date_hour is the hour of the event as a number from 0 to 23), use the server name (host) as the lookup field to bring the data in, and then use a where clause to filter out events that fall inside the safe window.
lookup file
host safe_begin safe_end
myHost 19 22
Query:
.... | where NOT (date_hour >= safe_begin AND date_hour <= safe_end)
After that, set your alert accordingly.
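A fuller sketch of how the pieces fit together (the index, sourcetype, and the lookup definition name safe_times are placeholders; it assumes the lookup file above has been uploaded and a lookup definition created for it):

index=app_logs sourcetype=app_server
| lookup safe_times host OUTPUT safe_begin safe_end
| where isnull(safe_begin) OR NOT (date_hour >= safe_begin AND date_hour <= safe_end)

The isnull(safe_begin) test keeps events from hosts that have no safe window defined in the lookup.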

Is there a way in VB.NET to make Process.Start run after closing my program?

My program checks whether there is a new version of itself. If there is, it should exit and start an updater that replaces it and then restarts it.
My problem is that I haven't found any info on how to make Process.Start run right after closing the actual program.
Any suggestions?
Thanks in advance.
I intended to add a comment, but I'm too low in points here. The updater itself should probably check whether an instance of your application is still running, and it should perform that check in a timeout loop that starts when the updater is launched. That way you can start the updater and then close your application. Once the updater determines your application is no longer running, it can compare versions and perform the intended update operation.
A possible solution would also be to create a task via Task Scheduler or a cron job that starts an out-of-process application, like CMD.exe, which brings me to my original comment-question: what operating system(s) and platform(s) is your program intended for?
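A minimal sketch of the launch-then-exit side in VB.NET (Updater.exe and the argument convention are placeholders; the updater is assumed to wait for this process to disappear, as described above):

Imports System.Diagnostics
Imports System.Reflection

Module UpdateLauncher
    Sub LaunchUpdaterAndExit()
        ' Path of the running executable, so the updater knows what to replace.
        Dim exePath As String = Assembly.GetExecutingAssembly().Location
        ' Start the updater as a separate process; it keeps running after we exit.
        Process.Start("Updater.exe", """" & exePath & """")
        ' Exit immediately so the executable file is no longer locked.
        Environment.Exit(0)
    End Sub
End Module

The updater side can then poll Process.GetProcessesByName for the application's process name in a loop with a timeout, and only start copying files once no instance is found.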

How to write to the event log with something other than "Application" as the log name?

I'm having a problem making my VB.NET application write to something other than "Application" in the Event Log.
I create my custom event log using the method EventLog.CreateEventSource("My_Source_Name", "My_Log_Name"),
where the first parameter is the source name and the second is the log name. This method runs every time and creates the event source, but when I go to add a new entry, I find that for some sources the entry is written under my custom log, while for other sources it is written to Application (sometimes with an error in the entry's header).
I need to know what exactly is going on. Am I following the right approach? If yes, what enhancements do I need to make to my code, and how can I stop this from happening so that all my entries end up under my custom log name? If no, what is the right way to do this, or is there another way of writing this code (even a solution other than the event log)?
Thank you very much :)
"To create an event source in Windows Vista and later or Windows Server 2003, you must have administrative privileges."
http://msdn.microsoft.com/en-us/library/5zbwd3s3.aspx
On the other hand, you should have a class (or interface) in charge of logging as a vertical layer of your application. That class is the one in charge of internally writing to the appropriate event source.
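As a minimal sketch of what that class could do internally (the source and log names are the ones from the question; the re-registration branch reflects one likely cause of entries landing in Application: a source that was ever registered under a different log keeps pointing there until it is deleted and re-created):

Imports System.Diagnostics

Module CustomLogWriter
    Sub WriteToCustomLog(message As String)
        Const source As String = "My_Source_Name"
        Const logName As String = "My_Log_Name"

        If Not EventLog.SourceExists(source) Then
            ' Creating a source requires administrative privileges.
            EventLog.CreateEventSource(source, logName)
        ElseIf EventLog.LogNameFromSourceName(source, ".") <> logName Then
            ' The source is already registered to another log (e.g. "Application"),
            ' so delete it and re-create it under the custom log.
            EventLog.DeleteEventSource(source)
            EventLog.CreateEventSource(source, logName)
        End If

        EventLog.WriteEntry(source, message, EventLogEntryType.Information)
    End Sub
End Module

Keep in mind that after a source is created or remapped to a different log, Windows may need some time (or a restart) before entries reliably show up in the new log, so it's best to register the source ahead of time, e.g. during installation.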
However, if you need something powerful I really recommend Log4Net.
http://logging.apache.org/log4net/