How many rules can ElastAlert load? Limit on the number of rules for ElastAlert - elastalert

How many rule files is it advisable to load with a single-node ElastAlert?
What is the hard limit for setting up rules?
I have gone through the following link but didn't get an answer:
https://gitter.im/Yelp/elastalert?at=56de6014b0cc3f1b4150f00e
Can I load around 1000 alert rules in a single go?
How will it perform if the number of config files increases to 1000?

Related

TYPO3 10.4.14 increase upload limit

I can't upload files over 2 MB.
I have seen various procedures in different forums, but unfortunately they cannot be applied to TYPO3 10.4.14.
Does anyone know how I can increase the 2 MB file limit?
Many thanks in advance
It's not due to TYPO3 or a TYPO3 setting.
This 2 MB limit is PHP's default value for upload_max_filesize, so you (or your hoster) have to raise that limit to contemporary values.
https://www.php.net/manual/en/features.file-upload.common-pitfalls.php
In earlier versions there was a setting $GLOBALS['TYPO3_CONF_VARS']['BE']['maxFileSize'] with a default value of 10 MB. This setting was removed in TYPO3 v7.6.0 to keep TYPO3 in line with the PHP settings. (https://forge.typo3.org/issues/71110)
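A minimal sketch of how to check which PHP limits actually apply; the values in the comments are only example targets, and these directives have to be raised in php.ini (or via your hoster's panel), not at runtime:
<?php
// Print the PHP limits that govern uploads. They cannot be changed with
// ini_set() at runtime; raise them in php.ini (or ask your hoster), e.g.:
//   upload_max_filesize = 32M
//   post_max_size       = 40M
echo 'upload_max_filesize: ' . ini_get('upload_max_filesize') . PHP_EOL;
echo 'post_max_size:       ' . ini_get('post_max_size') . PHP_EOL;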

Prevent URL skipping when Bulk extracting with import.io

So, I've been extracting a lot of data with the import.io desktop app for quite some time, but what has always bugged me is that when you try to bulk extract multiple URLs, it always skips around half of them.
It's not a URL problem: if you take the same, say, 15 URLs, it will return, for example, 8 the first time, 7 the second time, 9 the third time; some links will be extracted the first time but skipped the second time, and so on.
I am wondering, is there a way to make it process all the URLs I feed it?
I have encountered this issue a few times when extracting data. It is typically due to the speed at which the Bulk Extract requests URLs from the site's servers.
A workaround is to use a Crawler as an Extractor. You can paste the URLs that you created/collected into the Where to Start, Where to Crawl, and Where to Get Data From sections (you need to click the advanced settings button in the Crawler).
Make sure to turn on 0-depth crawl. (This turns the Crawler into an Extractor, i.e. no discovery of additional URLs.)
Increase the Pause Between Pages.
Here is a screenshot of one I built some time ago.
http://i.gyazo.com/92de3b7c7fbca2bc4830c27aefd7cba4.png

PrestaShop 1.6 back office consuming too much bandwidth

My PrestaShop 1.6 website is consuming too much of my server's bandwidth. I tracked the problem down and it looks like the back office is causing it. How can I fix this? Do I have to disable any modules?
This depends on a whole lot of things. First off, how much bandwidth is it using? Do you have a quota? With PrestaShop, combinations can sometimes take up a lot of resources. Do you have many combinations? Another possibility is that it is coming from third-party modules.
On your server, open the /config/defines.inc.php file.
Find this line (around line 43): define('_PS_DEBUG_PROFILING_', false);
In this line, change "false" to "true".
Save your changes.
Now load any page of your store, either front end or back end, and it will display statistics at the bottom of the page. That should tell you where your server resources are going.
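For reference, a minimal sketch of the edited excerpt of /config/defines.inc.php (the exact line number varies between 1.6 releases); remember to set the value back to false once you are done profiling:
<?php
/* /config/defines.inc.php (excerpt) */
define('_PS_DEBUG_PROFILING_', true); // was false; shows timing, memory and SQL stats at the bottom of every page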

Determining all required DNS Queries to show a website

I need to create a list of all DNS Queries required to display a large number of sites (ideally up to 1 000 000). The list needs to assign the queries to the page that required them.
Example: Visiting google.com requires DNS queries for google.com, ssl.gstatic.com, apis.google.com and other names. My list would read something along the lines of
google.com:google.com,ssl.gstatic.com,apis.google.com,...
(exact format not relevant here)
I currently have two ideas on how to do this:
Set up a DNS server with query logging, then build a script that visits a given list of domains using my DNS server as the resolver.
Build a script that loads the source code of each site (think Python's urllib2, for example), parses all embedded content, and constructs the list of queries that would be needed.
Both ideas have problems, though. Visiting 1 000 000 domains with a gap of 2 seconds between visits (to make it possible to assign queries to the visited site afterwards), plus about 1 second per page load (which is pretty optimistic), would take over 34 days, probably longer. To build a parser, on the other hand, I would need a complete list of all possible forms of embedded content that can result in a DNS query, I would need to fetch some of the referenced URLs as well (think iframes), and some content would be impossible to check for further queries (think Flash content that connects to other servers).
I'm kind of stuck here and would appreciate some input on how to deal with this. It would be possible to shorten the list of URLs to maybe 100 000, but anything less would dramatically reduce the usefulness of the result.
For context: I need this list for my bachelor thesis, which deals with an attack strategy against a proposed DNS privacy extension.
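As an illustration of the second idea, here is a minimal, untested PHP sketch (DOMDocument-based); the tag/attribute list is deliberately incomplete, and it misses anything loaded dynamically by scripts:
<?php
// Collect the hostnames referenced by a page's static HTML, i.e. the
// names a browser would at least have to resolve when rendering it.
function referencedHosts(string $url): array {
    $html = @file_get_contents($url);   // naive fetch, no JavaScript execution
    if ($html === false) {
        return [];
    }
    $doc = new DOMDocument();
    @$doc->loadHTML($html);             // suppress warnings on sloppy markup
    $hosts = [parse_url($url, PHP_URL_HOST)];
    // Incomplete mapping of tags to the attribute carrying a URL.
    foreach (['img' => 'src', 'script' => 'src', 'link' => 'href', 'iframe' => 'src'] as $tag => $attr) {
        foreach ($doc->getElementsByTagName($tag) as $node) {
            $host = parse_url($node->getAttribute($attr), PHP_URL_HOST);
            if ($host) {
                $hosts[] = $host;
            }
        }
    }
    return array_values(array_unique($hosts));
}

// Example output in the format from the question: page:host1,host2,...
echo 'google.com:' . implode(',', referencedHosts('https://google.com')) . PHP_EOL;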
You can use PhantomJS to do this, as it provides an interface that lets you capture network requests and log them, something along the lines of this example.
You'd need to write some simple JavaScript, but since it's scriptable, it should be fairly easy to run this asynchronously and gather the data you need within a reasonable time.
There is a tool that can do this and produce a graphical representation. It is part of dnssec-tools and is called dnspktflow (DNS Packet Flow).
It may not do exactly what you want, but it is open source, so you can see how they do it.

PHP $_POST / $_FILES empty when upload larger than POST_MAX_SIZE [duplicate]

Possible Duplicate:
How to detect if a user uploaded a file larger than post_max_size?
I'm writing a script that handles file uploads from a web application. I've got a set limit on the size of files that may be uploaded to my application (storage space limitations). I'm currently trying to put some validation code in that will check to make sure the user actually uploaded a file, so that I can display a nice error message to them. But I'd also like to be able to display an error message to the user if they've uploaded a file that's too big. I can use Javascript for this, but I'd like a PHP check as well in case they don't have Javascript enabled.
I've set post_max_size in php.ini to the maximum upload size I want to allow, but this has produced an unexpected issue. If someone tries to upload a file larger than post_max_size, the binary data just gets truncated at the max size and the $_FILES array doesn't contain an entry for that file. This is the same behavior that would occur if the user didn't submit a file at all.
This makes it difficult to tell why the $_FILES array doesn't contain a file, i.e. whether it wasn't ever uploaded, or whether it was too big to send completely.
Is there a way to distinguish between these two cases? In other words, is there a way to tell whether POST data was sent for a file, but was truncated prematurely before the entire file was sent?
Odd as it may seem, this is intentional behavior. post_max_size is a low-level, last-resort failsafe: to protect you and prevent DoS attacks, the server can do nothing but discard all POST data when it realizes, mid-stream, that it is receiving more data than it can safely handle. You can raise this value if you know you need to receive more data than this at once (but be sure your server can handle the increased load), but I'd suggest looking into other ways of handling your use case; hitting post_max_size suggests to me that there may be more robust solutions than one massive HTTP POST, such as splitting it up into multiple AJAX calls.
Separate from post_max_size there is upload_max_filesize, the php.ini setting that limits a single uploaded file, which is what I assumed you were talking about initially. If a file exceeds this value, PHP sets $_FILES['file']['error'] to 1 (UPLOAD_ERR_INI_SIZE). Generally speaking, you want to have your site set up like this:
The form's MAX_FILE_SIZE hidden field should be set to the maximum you actually want to accept for that form. Anyone attempting to exploit your site can get around it, but it's nice for legitimate users, as the browser can stop them from wasting bandwidth on an upload that will be rejected. This should always be smaller than your server-side settings.
upload_max_filesize is the maximum size the server will accept for a single file, discarding anything larger and reporting the error in the $_FILES array. This should be larger than the largest file you want to accept anywhere on your site.
post_max_size is the maximum amount of data your server is willing to accept in a single POST request. It must be bigger than upload_max_filesize for large uploads to succeed, and much bigger to allow more than one file per upload. I might suggest upload_max_filesize * 4.1: that allows four large files at a time plus a little extra data. YMMV, of course, and you should make sure your server can handle whatever values you set; see the sketch below.
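To make the relationship concrete, here is a minimal, untested sketch of how those limits surface in an upload handler (the field name file is assumed):
<?php
// Hypothetical handler for a single upload field named 'file'.
if (isset($_FILES['file'])) {
    switch ($_FILES['file']['error']) {
        case UPLOAD_ERR_OK:
            echo 'Upload OK: ' . $_FILES['file']['size'] . ' bytes';
            break;
        case UPLOAD_ERR_INI_SIZE:   // file exceeded upload_max_filesize in php.ini
            echo 'File is larger than upload_max_filesize';
            break;
        case UPLOAD_ERR_FORM_SIZE:  // file exceeded the form's MAX_FILE_SIZE field
            echo 'File is larger than the form MAX_FILE_SIZE';
            break;
        default:
            echo 'Upload failed (error code ' . $_FILES['file']['error'] . ')';
    }
}
// If the whole request exceeded post_max_size, $_FILES and $_POST are empty,
// so none of the branches above ever runs -- that case is handled below.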
As for your specific question of how to tell: the PHP documentation on post_max_size that I linked to suggests setting a GET variable in the form, i.e.
<form action="edit.php?processed=1">
However, as I said above, if you're running into this issue, you may want to explore alternative upload methods.
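A minimal, untested sketch of the matching server-side check (edit.php and the processed marker come from the form above); query-string parameters survive even when PHP discards an oversized POST body:
<?php
// edit.php -- if the marker arrived via the query string but the body is
// gone, the POST almost certainly exceeded post_max_size.
if (isset($_GET['processed']) && empty($_POST) && empty($_FILES)) {
    echo 'Your upload was larger than the server allows (post_max_size).';
}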
Something like this:
if (isset($_SERVER['CONTENT_LENGTH']) && (int) $_SERVER['CONTENT_LENGTH'] > 0 && empty($_FILES) && empty($_POST)) {
    // A body was sent but PHP discarded it: the POST exceeded post_max_size.
}
Untested, so play around with the various scenarios to see what combination works. Not sure if it works with multiple file uploads at the same time.
You may need to inspect $_SERVER['CONTENT_LENGTH'] and compare it to the sum of files received if dealing with multiple uploads.
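Along the same lines, a rough, untested sketch that compares the declared request size against post_max_size directly; the shorthand parser only handles the usual K/M/G suffixes:
<?php
// Convert php.ini shorthand such as "8M" into bytes.
function iniBytes(string $value): int {
    $value = trim($value);
    $num   = (int) $value;
    switch (strtolower(substr($value, -1))) {
        case 'g': $num *= 1024; // deliberate fall-through
        case 'm': $num *= 1024;
        case 'k': $num *= 1024;
    }
    return $num;
}

$max  = iniBytes(ini_get('post_max_size'));
$sent = isset($_SERVER['CONTENT_LENGTH']) ? (int) $_SERVER['CONTENT_LENGTH'] : 0;

if ($sent > $max) {
    // PHP has already discarded $_POST and $_FILES at this point.
    echo "Request of $sent bytes exceeds post_max_size of $max bytes.";
}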