How to control the logging level of Longhorn deployments? - amazon-eks

Currently our longhorn-system namespace generates a large volume of logs at every logging level, and we export them all to an ELK stack, which is driving up memory use. We want to set the logging level manually to control the Longhorn logs.

Related

Limiting logfile size and archiving

I'm using an Express app and I need to save whatever logs it produces. Any logging middleware will work (winston, simple-node-logger, etc.), but there are strict requirements: a log file should not exceed 50 MB, and when it reaches this size it should be zipped and stored as history. Only 20 log zips may exist at a time. Simply logging to a file and limiting its size is easy enough by setting up the winston config, but how do I set up the size monitoring and zipping feature AND limit the number of history logs? All of this has to work simultaneously with the Express app running. Thanks!

ChronicleMap recovery with a multi-process application

We are evaluating ChronicleMap, and our application runs in cluster mode with anywhere from 5 to 45 nodes. The plan is to have the ChronicleMap persisted in a shared NFS folder so that all the nodes can read/write.
With this setup, there is a real chance that individual nodes could go down for various reasons in the middle of a read/write operation. I have some questions:
1. If node-1 goes down during a write operation, can another healthy node-2 in the cluster still continue to read/write to the files?
2. Let's say we implement some logic to detect a server crash and call .recoverPersistedTo() on restart. Will this cause any issues while other healthy nodes in the cluster are reading/writing to the files? The reason I ask is that the documentation says: “You must ensure that no other process is accessing the Chronicle Map store when calling .recoverPersistedTo()”
3. I have read that using .recoverPersistedTo() in place of createPersistedTo() is not a good practice, but what are the downsides?
First of all, we (Chronicle) don't support putting Chronicle Map files on NFS (as we use memory mapping and NFS is known to cause problems with it). Additionally, trying to use recovery on NFS will cause data corruption as there's no adequate file locking on NFS, and recovery tries to lock the file to prevent simultaneous recovery by multiple processes. In general, open source Chronicle Map is supposed to be used by multiple processes on the same host.
The solution to your problem is the commercial Map Enterprise, which supports map replication between nodes; please contact sales@chronicle.software for details.
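For the single-host, multi-process setup that open source Chronicle Map does support, a minimal sketch of recovery on restart might look like the following (Chronicle Map 3 API; the key/value types, entry count, and file path are illustrative assumptions, not from the question):

import java.io.File;
import net.openhft.chronicle.map.ChronicleMap;

public class RecoverOnRestart {
    public static void main(String[] args) throws Exception {
        // Illustrative path on a local disk (not NFS, per the answer above)
        File file = new File("/var/data/node-status.dat");

        // createOrRecoverPersistedTo() creates the file if it does not exist,
        // otherwise opens it and runs recovery if needed; per the docs, no
        // other process may access the store while recovery is running.
        try (ChronicleMap<String, String> map = ChronicleMap
                .of(String.class, String.class)
                .name("node-status")
                .averageKey("node-1")
                .averageValue("alive")
                .entries(10_000)
                .createOrRecoverPersistedTo(file)) {
            map.put("node-1", "alive");
        }
    }
}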

Two WebJobs appear to be locking each other

Could someone help me discover what is going on with our App Services? We have two App Services connected to two blob storage containers; each is triggered when an item is placed in the container it is listening to.
App One              App Two         (under the same subscription)
   |                    |
WebJobs (9)          WebJobs (9)
   |                    |
Container One        Container Two   (under the same storage account)
This represents environments: App One is our dev environment and App Two is our test environment. Each item placed into one of the containers triggers a WebJob in its App Service. There is also an archive container under the storage account for each App Service, where a copy of the blob is archived.
The situation we are in is that we seem to be unable to run both WebJobs at the same time (one of the nine in each). We can only get a trigger to activate in one WebJob when the WebJob in the other App Service is stopped. They appear to be locking each other out, but I was under the impression that the structure we have would keep all of that separate, so the locks would not interfere with each other. The information I can find says that reading a blob takes a lock on the blob and updating a blob takes a lock on the container. If that is correct, why do they appear to be locking each other out?
Any advice on what may be causing this, or how to move forward in troubleshooting it, will be greatly appreciated.
This problem seems to be related to the logic of your WebJobs functions. If the WebJobs access the same resource at the same time, they will influence each other, and that can cause this problem. Please have a look at the conflict section of the documentation.

How can the Bitronix transaction timeout value be increased in Moqui?

While developing an application in the Moqui framework (1.4.1), a frustrating issue regarding the Bitronix transaction timeout occurs. I am not able to understand why this happens, and the only fix so far is to restart my system.
I would really like to know how I can rectify this problem.
The exception is shown in a screenshot (not reproduced here).
Setting the transaction timeout is done where the transaction is begun. That is in your code, written either with the Moqui tools such as a service or screen, or in Java/Groovy/etc. code that uses the Moqui TransactionFacade or the JTA interfaces directly.
By default Moqui screens are not run in transactions unless you set the screen.@begin-transaction attribute. Chances are your problem is in a long-running service, and by default Moqui services ARE run in transactions. Set the timeout using the service.@transaction-timeout attribute on the service that begins the transaction. By default services use a transaction already in place if there is one, so this needs to go on the outermost service, where the transaction is actually begun.
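As a hedged sketch of the programmatic route, code can begin its own transaction with a longer timeout through the TransactionFacade. The facade and its begin/commit/rollback methods follow the Moqui API; the surrounding class and the 3600-second value are made-up examples:

import org.moqui.context.ExecutionContext;
import org.moqui.context.TransactionFacade;

public class LongRunningWork {
    public static void run(ExecutionContext ec) throws Exception {
        TransactionFacade tf = ec.getTransaction();
        // begin() takes the timeout in seconds; pass null to use the default
        boolean began = tf.begin(3600);
        try {
            // ... long-running work that would otherwise hit the
            // Bitronix transaction timeout ...
            tf.commit(began);
        } catch (Exception e) {
            tf.rollback(began, "Long-running work failed", e);
            throw e;
        }
    }
}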
For more details about services and transaction management see the Making Apps with Moqui book, available for download from moqui.org.
You may have another issue in your code as well: the socket for the browser request timing out (I see that in the log in your screenshot too). There are some ways around this, but also some things you can't control so easily, like when the browser times out. For a good UI it is also best not to have your user sit and wait for more than the typical 30-60s where such timeouts start hitting. Change your code to run in the background and, if needed, add something to your screen to monitor the status and/or progress of the job.
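As a sketch of that background approach, a service can be invoked asynchronously through the ServiceFacade so the screen request returns right away. The async()/name()/parameter()/call() chain follows the Moqui API; the service name and parameter here are hypothetical:

import org.moqui.context.ExecutionContext;

public class BackgroundLauncher {
    public static void launch(ExecutionContext ec) {
        // Run the (hypothetical) long job asynchronously; the screen request
        // returns immediately and the job runs in its own transaction.
        ec.getService().async()
                .name("org.example.LongJobServices.process#LargeData")
                .parameter("inputFile", "/tmp/data.csv")
                .call();
    }
}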
If you are loading large files with the java -jar load command, you can use the timeout parameter to set the timeout in seconds to, say, 3600 (for example, java -jar moqui.war load timeout=3600); the default is 600 seconds.
More about the load command parameters at:
java -jar moqui.war help

How to restrict WCF logging

We're using lots of WCF services in our application and we're finding the logging really useful, but the files tend to grow fairly quickly. In fact, we can usually only play around with a service for 10 minutes or so until the log file is more than 10 MB and too slow to load.
Is there any way to restrict the logging to only 1000 entries, use a rolling file, etc.?
You might be interested in checking out the following trace listener:
The Code Project: A Rolling XmlWriterTraceListener
Ever had the problem of growing svclog files after configuring tracing in a productive WCF environment? Did not want to restart the application just for deleting or moving the trace files?
Then you will like the RollingXmlWriterTraceListener, which is a specialized XmlWriterTraceListener and is completely compatible with the WCF tracing facility.
Configuring Message Logging describes how to restrict the log files to a certain size or number of entries. I found this to be a really helpful document.
http://msdn.microsoft.com/en-us/library/ms730064.aspx