Set Data Limit in Loggly

Is there any way to set a limit in Loggly so that it keeps only the last seven days of log data? Older data should be deleted automatically, so Loggly never fills up because it only ever holds a few days of information.
Thanks

If you are using a free account, Loggly automatically deletes data after 7 days. You should not worry about how much data is in storage, as long as your volume is under 200 MB per day, measured from 00:00 UTC.

Related

Kusto Query limitation gets timeout error

I am running a Kusto query from Azure Diagnostics that queries the last week of logs, and it times out after 10 minutes. Is there a way to increase the timeout limit? If yes, can someone please guide me through the steps? I downloaded Kusto Explorer but couldn't see an easy way to connect to my Azure cluster. I need help with how to increase this timeout from inside the Azure portal for the query I am running.
It seems that 10 minutes is the maximum timeout value.
https://learn.microsoft.com/en-us/azure/azure-monitor/service-limits
Query API

| Category | Limit | Comments |
| --- | --- | --- |
| Maximum records returned in a single query | 500,000 | |
| Maximum size of data returned | ~104 MB (~100 MiB) | The API returns up to 64 MB of compressed data, which translates to up to 100 MB of raw data. |
| Maximum query running time | 10 minutes | See Timeouts for details. |
| Maximum request rate | 200 requests per 30 seconds per Azure AD user or client IP address | See Log queries and language. |
https://learn.microsoft.com/en-us/azure/azure-monitor/logs/api/timeouts
Timeouts
Query execution times can vary widely based on:
- The complexity of the query
- The amount of data being analyzed
- The load on the system at the time of the query
- The load on the workspace at the time of the query
You may want to customize the timeout for the query.
The default timeout is 3 minutes, and the maximum timeout is 10 minutes.
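If you are calling the query REST API directly (rather than running the query in the portal), the timeout can be raised from the 3-minute default up to the 10-minute cap with the Prefer: wait header. A minimal sketch using Python's requests library, where the workspace ID, token, and query are placeholders (the endpoint host may differ depending on your cloud):

```python
import requests

WORKSPACE_ID = "<log-analytics-workspace-id>"  # placeholder
TOKEN = "<azure-ad-bearer-token>"              # e.g., obtained via azure-identity

resp = requests.post(
    f"https://api.loganalytics.io/v1/workspaces/{WORKSPACE_ID}/query",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        # Ask the service to wait up to 600 s (the 10-minute cap) instead of
        # the 3-minute default; larger values are not honored.
        "Prefer": "wait=600",
    },
    json={"query": "AzureDiagnostics | where TimeGenerated > ago(7d) | count"},
    timeout=620,  # client-side timeout, slightly above the server-side cap
)
resp.raise_for_status()
print(resp.json())
```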

How is a cache entry's validity period calculated in Mule 4?

If I cache a payload, how long will it be valid?
There are two settings in the caching strategy:
Entry TTL and
Expiration Interval.
If I want to invalidate my cached value after 8 hours, how should I set these parameters?
What is the 'invalidate cache' processor used for?
Entry TTL is how long an entry should live in the cache. Expiration interval is how frequently the object store checks its entries to see whether any should be deleted. In your case, entryTTL should be 8 hours. Be mindful of the units used for each attribute. Expiration interval is a bit trickier: you may want to check entries much more frequently so they don't live well past 8 hours before expiring. It could be 10 minutes, 30 minutes, 1 hour, or whatever works for you.
I explain this in more detail in my blog: https://medium.com/@adobni/configuring-an-object-store-in-mule-4-5da609e3456a
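To make that concrete, an 8-hour TTL with a 30-minute expiration sweep might be configured as in the sketch below. The store and strategy names are placeholders, and this assumes the Mule 4 ObjectStore connector's entryTtl/expirationInterval attributes; check the attribute names against your connector version:

```xml
<!-- Entries live at most 8 hours; expired entries are swept every 30 minutes. -->
<os:object-store name="cacheObjectStore"
                 entryTtl="8" entryTtlUnit="HOURS"
                 expirationInterval="30" expirationIntervalUnit="MINUTES"
                 maxEntries="1000"
                 persistent="false" />

<ee:object-store-caching-strategy name="cachingStrategy"
                                  objectStore="cacheObjectStore" />
```

As for the 'invalidate cache' processor: as I understand it, it flushes every entry in a caching strategy on demand, so you can evict values immediately instead of waiting for the TTL (there is also an 'invalidate key' operation for evicting a single entry).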

What is the maximum LIMIT DURATION in the LAG function in ASA?

I am streaming data from devices and I want to use the LAG function to identify the last value received from a particular device. The data is not streamed at a regular period and in rare cases it could be days between receiving data from a device.
Is there a maximum period for the LIMIT DURATION clause?
Is there any downside to having long LIMIT DURATION periods?
There is no maximum period for LIMIT DURATION in the language. However, it is limited by the amount of data the input source can retain - e.g., 1 day is the default retention policy for Event Hubs (it can be increased in the configuration).
When a job is started, Azure Stream Analytics reads up to LIMIT DURATION worth of data from the source to make sure it has the correct value for LAG at job start time. If the data volume is high, this can increase job start time.
If you need data that is more than several days old, it may make more sense to use it as reference data (which can be refreshed at daily intervals, for example).
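For illustration, a LAG looking back up to 7 days per device might look like the sketch below; the input and column names are placeholders:

```sql
SELECT
    deviceId,
    reading,
    -- Previous reading from the same device, looking back at most 7 days.
    LAG(reading) OVER (PARTITION BY deviceId LIMIT DURATION(day, 7)) AS previousReading
FROM input
```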

What is the correct way to get the total count of a metric in Graphite

I am trying to fetch the total number of successful logins using Graphite's render API.
http://localhost/render?target=hitcount(stats_counts.login.success,"99years",true)&from=-99years&format=json
This query takes too long to execute (~30 seconds).
Is this the correct way to fetch the total number?
It depends on:
- The amount of data the API call has to scan to render this query.
- Granularity: one point per month will be ~30 times faster than one point per day.
- Physical host specs: while your query runs, use iostat -x 1 to see whether your disk is at 100% I/O.
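If you only need the number rather than a chart, you can fetch the JSON and sum the buckets client-side. A sketch against the same render call (host and metric path taken from the question); each datapoint in the response is a [value, timestamp] pair:

```python
import requests

resp = requests.get(
    "http://localhost/render",
    params={
        # One hitcount bucket spanning the whole window keeps the response
        # tiny, but Graphite still reads every underlying point, so retention
        # granularity and disk I/O drive the ~30 s you are seeing.
        "target": 'hitcount(stats_counts.login.success,"99years",true)',
        "from": "-99years",
        "format": "json",
    },
    timeout=60,
)
resp.raise_for_status()
datapoints = resp.json()[0]["datapoints"]  # list of [value, timestamp]
print(sum(v for v, _ts in datapoints if v is not None))
```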

GAE Java API Channel

In http://code.google.com/intl/es-ES/appengine/docs/quotas.html#Channel you can read that, with billing enabled, the maximum channel creation rate is 60 creations/minute. Does that mean we can create only 86,400 channels/day? That is a very low rate, isn't it? And if I have estimated that I could have peaks of, for example, 4,000 creations/minute, what can I do? 60 creations/minute is very few if the channels are one-to-one... Is this correct?
My interpretation of that section is that you will NOT be able to create 4,000 channels per minute. Here is how I would think about it: over ANY 1-minute period, no more than 60 channels can be created. For example, you can create 60 channels at time T; then, for the next 60 seconds, you won't be able to create any. Or you can create 30 at time T and then create one channel every 2 seconds.
I believe another way to think about this is in terms of the token bucket algorithm.
Anyway, I believe you can fill out this form to request a higher limit. There is a link to that form from the docs you linked to in your question.
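To illustrate the token-bucket view of the quota, here is a minimal sketch. The capacity of 60 and the 1-token-per-second refill are assumptions chosen to mirror "60 creations/minute", not GAE's actual enforcement:

```python
import time

class TokenBucket:
    """Token bucket: allows bursts up to `capacity`, refills at `rate` tokens/sec."""

    def __init__(self, capacity: float, rate: float):
        self.capacity = capacity  # burst size (e.g., 60 channels)
        self.rate = rate          # refill rate (e.g., 1 token per second)
        self.tokens = capacity
        self.last = time.monotonic()

    def try_acquire(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# 60 creations/minute: a burst of 60, refilling one token per second.
bucket = TokenBucket(capacity=60, rate=1.0)
created = sum(bucket.try_acquire() for _ in range(100))
print(created)  # 60: the first 60 succeed immediately, the rest are throttled
```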