Adding a custom input source to syslog-ng and directing it to a different file

We use syslog-ng to record metrics. We use the systemd journal for logging; we emitted metrics as part of the logs and then filtered them out by adding a filter in /etc/syslog-ng.conf. This worked well, but if a process spams a lot of logs, the default rate limiting imposed by systemd drops its messages, and our metrics get dropped along with them. We don't want to raise the rate limit, as that might impact CPU and performance, but on the other hand we don't want to lose metrics either.
I'm wondering if there is a way to add a custom source in syslog-ng for this use case.

Answering my own question: after digging through a few places, I figured out that syslog-ng lets us define a custom source and use it for this use case.
I added the following to the default /etc/syslog-ng.conf:
source metrics {
    unix-dgram("/run/metrics" flags(no-parse));
    # We could use a stream socket as well
};

destination metrics_priority_normal {
    file("/var/metrics/metrics_priority_normal" template("$MSG\n"));
};

log {
    source(metrics);
    filter { match("MetricPriority=NORMAL") };
    destination(metrics_priority_normal);
};
On startup syslog-ng will now create a unix socket at /run/metrics, and we can write metrics directly to it; they will be routed to /var/metrics/metrics_priority_normal.
An example of how to create the client socket:
https://man7.org/linux/man-pages/man7/unix.7.html
The server side of the socket is taken care of by syslog-ng.
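For illustration, a minimal client sketch in Python (the socket path and the metric line format follow the config above; the metric name itself is made up):

import socket

# The datagram socket at /run/metrics is created by syslog-ng on startup (see the
# source block above); each datagram sent to it becomes one metric line in the file.
sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
sock.sendto(b"MetricPriority=NORMAL my_metric=42", "/run/metrics")
sock.close()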
Various other filters can be added as well; details:
https://www.syslog-ng.com/technical-documents/doc/syslog-ng-open-source-edition/3.16/administration-guide#TOPIC-956384
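For example, assuming the metric lines can also carry a MetricPriority=HIGH marker, a second log path could route those to their own file:

destination metrics_priority_high {
    file("/var/metrics/metrics_priority_high" template("$MSG\n"));
};

log {
    source(metrics);
    filter { match("MetricPriority=HIGH") };
    destination(metrics_priority_high);
};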

Related

Configure .eds file to map channels of a CANopen Client PLC

In order to use a PLC as a Client (formerly “Slave”), one has to configure the PDO channels, since the manufacturer's default values are often not suitable. In my case, I need the PDOs to send INT values instead of the default UNSIGNED8 (see picture).
Hence my question: what kind of workflow would you recommend for mapping the CANopen Client PDO channels?
I found the following workflow suitable; however, I appreciate any improvements and recommendations from your side!
1. Start by locating the .eds file from the manufacturer. (The image showed this in the B&R Automation Studio programming environment.)
2. Open the file in an .eds editor. I found the free Vector CANeds editor very useful. Delete all RxPDOs and RxPDO mappings that you don't need.
3. Assign the needed data type (e.g. INTEGER16) and channel name (“1 Byte In (1)”).
4. Add the necessary PDOs and PDO mappings from the database. (This might actually be a bug, but if I just edit the PDOs without deleting and recreating them, I always receive error messages.)
5. Map the data to the channels.
6. Don't forget to write the number of channels into the first entry (in this image: 1601sub0); see the snippet after this list.
7. Check the .eds file for errors (press F5) and copy & paste the .eds file back to the original location from step 1.
8. Add the PLC Client device in Automation Studio and you should see the correct mappings.
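For reference, a rough sketch of how such an RPDO mapping can look in the .eds file (the mapped object 0x6411 sub 1 and the parameter names are illustrative; in the mapping value 0x64110110, the last byte 0x10 is the length in bits, i.e. 16 for INTEGER16, and 1601sub0 holds the number of mapped channels as noted in step 6):

[1601sub0]
ParameterName=Number of mapped objects
ObjectType=0x7
DataType=0x0005
AccessType=rw
DefaultValue=1
PDOMapping=0

[1601sub1]
ParameterName=PDO mapping entry 1
ObjectType=0x7
DataType=0x0007
AccessType=rw
DefaultValue=0x64110110
PDOMapping=0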
(PS: I couldn't make the images smaller ... any recommendations about formatting this question are welcome!)

Performance issues with large datasets

Is there any way of filtering the events in a projection associated with a read model by aggregateId?
In the tests we carried out, we always receive all registered events. Is it possible to apply filters at an earlier stage?
We have 100,000 aggregateIds and each id has 15,000 associated events. Unable to filter by aggregateId, our projections have to iterate over all events.
So you have 100,000 aggregates with 15,000 events each.
You can use a Read Model or a View Model:
Read Model:
A read model can be seen as a read database for your app. So if you want to store some data about each aggregate, you should insert/update a row or entry in some table for each aggregate; see the Hacker News example read model code.
It is important to understand that reSolve read models are built on demand, on the first query. If you have a lot of events, this may take some time.
Another thing to consider: a newly created reSolve app is configured to use an in-memory database for read models, so they will be rebuilt on each app start.
If you have a lot of events and don't want to wait for read models to be rebuilt each time you start the app, you have to configure real database storage for your read models.
Configuring adapters is not well documented; we'll fix this. Here is what you need to write in the relevant config file for MongoDB:
readModelAdapters: [
  {
    name: 'default',
    module: 'resolve-readmodel-mongo',
    options: {
      url: 'mongodb://127.0.0.1:27017/MyDatabaseName'
    }
  }
]
Since you have a database engine, you can use it for an event store too:
storageAdapter: {
  module: 'resolve-storage-mongo',
  options: {
    url: 'mongodb://127.0.0.1:27017/MyDatabaseName',
    collectionName: 'Events'
  }
}
View Model:
A view model is built on the fly during the query. It does not require storage, but it reads all events for the given aggregateId.
reSolve view models use snapshots. So if you have 15,000 events for a given aggregate, then on the first request all of those events will be applied to calculate the view state for the first time. After that, the state is saved, and all subsequent requests read the snapshot plus any later events. By default a snapshot is taken every 100 events, so on the second query reSolve would read the snapshot for this view model and apply no more than 100 events on top of it.
Again, keep in mind that if you want snapshot storage to be persistent, you should configure a snapshot adapter:
snapshotAdapter: {
  module: 'resolve-snapshot-lite',
  options: {
    pathToFile: 'path/to/file',
    bucketSize: 100
  }
}
A view model has one more benefit: if you use the resolve-redux middleware on the client, it will be kept up to date there, reactively applying the events that the app receives via WebSockets.

JMeter and data test visualization

I'm a novice in the JMeter world and I'm trying to get graphs containing only the data used in the test; no JMeter metrics are needed.
My test case consists of many sensors sending information to a central point, which has to process this info and send a response to a consumer.
The sensors are a group of threads where every single sensor has its own CSV data file. The consumer is an AMQP Consumer.
I would like to save the following to CSV files:
One file for the information sent by every sensor, with the timestamp (one file per sensor).
One file containing all of the consumer's responses.
So far, I have messed with the Aggregate Report and with sample_variables declared in the user.properties file. This way, JMeter includes the variables declared in user.properties in every report.
Does JMeter fit my needs?
You can precisely control what JMeter stores in the .jtl results file by amending the relevant Results file configuration properties; for example, the following entries in the user.properties file will suppress all JMeter metrics and leave only the timestamps:
jmeter.save.saveservice.assertion_results_failure_message=false
jmeter.save.saveservice.data_type=false
jmeter.save.saveservice.label=false
jmeter.save.saveservice.response_code=false
jmeter.save.saveservice.response_message=false
jmeter.save.saveservice.successful=false
jmeter.save.saveservice.thread_name=false
jmeter.save.saveservice.time=false
jmeter.save.saveservice.assertions=false
jmeter.save.saveservice.latency=false
jmeter.save.saveservice.connect_time=false
jmeter.save.saveservice.bytes=false
jmeter.save.saveservice.sent_bytes=false
jmeter.save.saveservice.idle_time=false
jmeter.save.saveservice.print_field_names=false
jmeter.save.saveservice.thread_counts=false
The same can be done using the -J command-line option, like:
jmeter -Jjmeter.save.saveservice.assertion_results_failure_message=false -Jjmeter.save.saveservice.data_type=false -Jjmeter.save.saveservice.label=false -Jjmeter.save.saveservice.response_code=false -Jjmeter.save.saveservice.response_message=false -Jjmeter.save.saveservice.successful=false -Jjmeter.save.saveservice.thread_name=false -Jjmeter.save.saveservice.time=false -Jjmeter.save.saveservice.assertions=false -Jjmeter.save.saveservice.latency=false -Jjmeter.save.saveservice.connect_time=false -Jjmeter.save.saveservice.bytes=false -Jjmeter.save.saveservice.sent_bytes=false -Jjmeter.save.saveservice.idle_time=false -Jjmeter.save.saveservice.print_field_names=false -Jjmeter.save.saveservice.thread_counts=false -n -t test.jmx -l result.jtl
In order to create a separate result file per request you can use the Flexible File Writer listener, which allows storing arbitrary metrics. You will need to add a Flexible File Writer as a child of each Sampler whose response you would like to store. Flexible File Writer can be installed using the JMeter Plugins Manager.
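If you also want the test's own data (for example the values read from the CSV files) in the results file, the sample_variables property the question already mentions can be combined with the settings above; a small sketch, assuming JMeter variables named sensorId and sensorValue exist in the test plan:

# user.properties - append the listed JMeter variables as extra columns in the results file
sample_variables=sensorId,sensorValue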
As Dmitri T said, it is not possible to create charts for custom data in the current JMeter version.

Is it possible to write a plugin for Glimpse's existing SQL tab?

Is it possible to write a plugin for Glimpse's existing SQL tab?
I'm trying to log my SQL queries and the currently available extensions don't support our in-house SQL library. I have written a custom plugin which logs what I want, but it has limited functionality and it doesn't integrate with the existing SQL tab.
Currently, I'm logging to my custom plugin using a single helper method inside my DAL's base class. This function takes the SqlCommand and duration in order to show data on my custom tab:
// simplified example:
Stopwatch sw = Stopwatch.StartNew();
sqlCommand.Connection = sqlConnection;
sqlConnection.Open();
object result = sqlCommand.ExecuteScalar();
sqlConnection.Close();
sw.Stop();
long duration = sw.ElapsedMilliseconds;
LogSqlActivity(sqlCommand, null, duration);
This works well on my 'custom' tab but unfortunately means I don't get metrics shown on Glimpse's HUD.
Is there a way I can provide Glimpse directly with the info it needs (in terms of method names, and parameters) so it displays natively on the SQL tab?
The following advice is based on the fact that you can't use DbProviderFactory and you can't use a proxied SqlCommand, etc.
The data that appears in the "out-of-the-box" SQL tab is based on messages of given types being published through our internal Message Broker (see below for information on this). Because of the above limitations in your case, to get things lighting up correctly (i.e. your data showing up in the HUD and the SQL tab), you will need to simulate the work that we do under the covers when we publish these messages. This shouldn't be that difficult and, once done, should just work moving forward.
If you have a look at the various proxies we have here you will be able to see what messages we publish in which circumstances. Here are some highlights:
DbCommand:
Log command start - here
Log command error - here
Log command end - here
DbConnection:
Log connection open - here
Log connection closed - here
DbTransaction:
Log Started - here
Log committed - here
Log rollback - here
Other
Command row count here - Glimpse calculates this at the DbDataReader level, but you could do it elsewhere as well
Now that you have an idea of what messages we are expecting and how we generate them, as long as you pass in the right data when you publish those messages, everything should just light up. If you are interested, here is the code that looks for the messages that you will be publishing.
Message Broker: if you look at the GlimpseConfiguration here you will see how to access the Broker. This can be done statically if needed (as we do here). From here you can publish the messages you need.
Helpers: For generating some of the above messages, you can use the helpers inside the Support class here. I would have shifted all the code for publishing the actual messages to this class, but I didn't think there would be too many people doing what you are doing.
Update 1
Starting point: With the above approach you shouldn't need to write your own plugin. You should just be able to access the broker via GlimpseConfiguration.GetConfiguredMessageBroker() (make sure you check whether it's null, which it is if Glimpse is turned off, etc.) and publish your messages.
I would imagine that you would put the inspection code that leverages the broker and publishes the messages wherever you have knowledge of the information that needs to be collected (i.e. inside your custom lib). Normally this would require references to Glimpse inside your lib (which you may not want), so to protect against this, from your lib you would call a proxy (which could be another VS project) that has the Glimpse dependency. Hence your ADO lib only has references to your own code.
To get your toes wet, try just publishing a couple of fake connection and command messages. Assuming the broker you get from GlimpseConfiguration.GetConfiguredMessageBroker() isn't null, these should just show up. Then you can work towards getting real data into it from your lib.
Update 2
Obsolete Broker Access
It's marked as obsolete because it's going to change in v2. You will still be able to do what you need to do, but the way of accessing the broker will change. For what you currently need to do, this is OK.
Sometimes null
As you have found, this really depends on where in the page lifecycle you currently are. To get around this, I would probably change my original recommendation a little.
In the code where you are currently creating messages and pushing them to the message bus, try putting them into HttpContext.Current.Items instead. If you haven't used it before, this is a store that ASP.NET provides out of the box and that lasts the lifetime of a given request. You could keep a list in there: still create the message objects as you do now, but add them to that list instead of pushing them through the broker.
Then create an HttpModule (it's really simple to do) which taps into the PostLogRequest event. Within this handler, you would pull the list out of the context, iterate through it, and push each message into the message broker (accessed the same way you have been).

Capturing output messages during flyway migration

I would like to capture the messages logged by SQLServerDbSupport and DbMigrate during a migration. Calling flyway.migrate() does the migration, but it is not always obvious what actions were applied. I am hoping to capture this output to determine what changes, if any, were applied.
I already tried redirecting stdout to a ByteArrayOutputStream, but that didn't work, presumably because the logger is initialised before the redirection.
What other options are there to obtain the output messages?
All you have to do is configure whatever logging framework you use to achieve this. No need to reassign stdout.
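As a minimal sketch, assuming the application logs through SLF4J/Logback (which Flyway picks up automatically when it is on the classpath), a dedicated logger for the org.flywaydb package can route Flyway's output, including the SQLServerDbSupport and DbMigrate messages, to its own file:

<configuration>
  <!-- Collect Flyway's output in a separate file -->
  <appender name="FLYWAY" class="ch.qos.logback.core.FileAppender">
    <file>flyway-migration.log</file>
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>

  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>

  <!-- Everything logged under org.flywaydb goes only to the Flyway file -->
  <logger name="org.flywaydb" level="DEBUG" additivity="false">
    <appender-ref ref="FLYWAY"/>
  </logger>

  <root level="INFO">
    <appender-ref ref="STDOUT"/>
  </root>
</configuration>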
While this suggestion is great, I am not sure how it addresses the need to capture the output from one migration only while another migration is running. Do you have an example of a logger configuration which deals with an individual migration in a concurrent scenario?