DataTables server-side processing draw parameter

The DataTables documentation for the draw parameter says:
"It is strongly recommended for security reasons that you cast this parameter to an integer, rather than simply echoing back to the client what it sent in the draw parameter, in order to prevent Cross Site Scripting (XSS) attacks."
How can casting a parameter to an integer help prevent Cross Site Scripting?

You aren't supposed to do anything with it :-). On the client side it is dealt with automatically by DataTables. On the server side, all you do is cast it to an int and send it back. This example shows basic initialisation of server-side processing:
http://datatables.net/examples/data_sources/server_side.html
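To make that concrete, here is a minimal sketch of why the cast is enough — my own illustration using a hypothetical Node/Express handler rather than the PHP of the linked example (in PHP the same cast is intval($_GET['draw'])):

var express = require('express');
var app = express();

// Echo `draw` back as an integer, never as the raw string the client sent.
app.get('/data', function (req, res) {
  // A payload such as "1<script>alert(1)</script>" becomes 1;
  // a completely non-numeric value becomes 0.
  var draw = parseInt(req.query.draw, 10) || 0;

  res.json({
    draw: draw,          // always a number, so nothing can be reflected
    recordsTotal: 0,     // placeholder values for this sketch
    recordsFiltered: 0,
    data: []
  });
});

app.listen(3000);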
For other kinds of attack, the DataTables security documentation indicates two ways to prevent them:
Prevention
There are two options to stop this type of attack from being successful in your application:
Disallow any harmful data from being submitted
Encode all untrusted output using a rendering function.
For the first option your server-side script would actively block all data writes (i.e. input) that contain harmful data. You could elect to simply disallow all data that contains any HTML, or use an HTML parser to allow "safe" tags. It is strongly recommended that you use a known and proven security library if you take this approach - do not write your own!
The second option, using a rendering function, will protect against attacks when displaying the data (i.e. output). DataTables has two built-in rendering functions that can be used to protect against XSS attacks: $.fn.dataTable.render.text and $.fn.dataTable.render.number.
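For example, the text renderer can be applied to every column (a minimal sketch; the #example selector is a placeholder):

// HTML-encode data on output instead of letting the browser interpret it.
$('#example').DataTable({
  columnDefs: [
    {
      targets: '_all',
      render: $.fn.dataTable.render.text()
    }
  ]
});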
More Information: https://www.datatables.net/manual/security

Response.BinaryWrite(TEXT) is causing Fortify/Checkmarx XSS error

This is a pretty straightforward situation. We have a database table that has a VARBINARY(MAX) field, and this field contains a text file. On the .NET side the user can download the text file from the database. It's just plain text coming from a trusted source. However, Fortify/Checkmarx complains about Stored XSS. The code is pretty straightforward:
Response.Clear()
Response.ContentType = "text/plain"
Response.AddHeader("content-disposition", "attachment; filename=FileToDownload.txt")
Response.BinaryWrite(datafromDB)
Response.[End]()
The vulnerability scan points to the Response.BinaryWrite() call and complains of Stored XSS; of course this seems silly considering the data is coming from a trusted source. However, I want to find a way to remediate this. Is there a way to filter or sanitize the "datafromDB" object before it hits Response.BinaryWrite?
If the response is plain text, escaping it should not meaningfully change the content, while removing anything potentially exploitable. You should escape data before it is sent to the client, and here is why:
"Trusted data" does not exist. You are not considering:
A disgruntled employee may change the data in the DB and inject exploitable content.
There may be more than one path for data to get into the database. Not all of those paths may perform sanitization on the data before it is written to the DB. It is often common to have applications where data is imported into the back end database via an off-line mechanism.
The vulnerability analysis is not going to consider the content type, because reasoning about the content-type header is essentially control-flow analysis. The current vulnerability analysis technique (data-flow analysis) looks at the path of the data from source (the DB) to sink (the response stream) to recognize the exploit pattern. Setting the content type is not on that data-flow path; even if it were, the static string content is not evaluated for how it affects the state of the response object, because that is a control-flow change.
If you mark it "Not Exploitable" as-is, it is true that the code in its current state is not exploitable. However, if someone later changes the encoding value without changing the code involved in the data flow, the Not Exploitable marking is maintained. You will therefore have an undetected false negative, because a control-flow change has made the code exploitable.
Reliance on "compensating controls" such as setting response headers or relying on deployment configuration/environment controls works until it doesn't. Eventually, someone may decide to make a change that removes a compensating control (e.g. changing the response content type in this case) that was used to justify marking something Not Exploitable rather than remediating the issue properly. It then becomes a security hole that may be sitting in the open waiting for someone to find and exploit.
I'm all for relying on compensating controls, but with that comes the need to ensure changes to compensating controls are detected. It is often easier to implement the proper remediation rather than add more automation around detecting changes to compensating controls. The software implementation is the last line of defense when all else fails.
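For what it's worth, the remediation the scanner wants to see is output encoding applied on the data-flow path itself (in .NET, HttpUtility.HtmlEncode over the decoded text is the usual tool). Since the principle is language-neutral, a minimal JavaScript sketch of the idea:

// Sketch only (not the asker's VB.NET): encode untrusted text on output so
// any markup in it arrives as inert literal characters, not executable HTML.
function escapeHtml(text) {
  return String(text).replace(/[&<>"']/g, function (c) {
    return { '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#39;' }[c];
  });
}

// e.g. write escapeHtml(textFromDb) to the response instead of the raw bytes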

ModSecurity: create a config file with rules for a specific URL

I'm starting to learn about ModSecurity and rule creation. Say I know a page in a web app is vulnerable to cross-site scripting; for argument's sake let's say page /blah.html is prone to XSS.
Would the rule in question look something like this?:
SecRule ARGS|REQUEST_URI "blah" REQUEST_HEADERS "@rx <script>" id:101,msg:'XSS Attack',severity:ERROR,deny,status:404
Is it possible to create a config file for that particular page (and is it even wise to do so)? Better said: is it possible to create rules for particular URLs?
Not quite right; there are a few things wrong with this rule as it's written now, but I think you get the general concept.
Explaining what's wrong with the rule as it currently stands takes a fair bit of detail:
First up, ModSecurity syntax for defining rules is made up of several parts:
SecRule
field or fields to check
values to check those fields for
actions to take
You have two sets of parts 2 and 3, which is not valid. If you want to check two things, you need to chain two rules together, which I'll show an example of later.
Next up you are checking the Request Headers for the script tag. This is not where most XSS attacks exist and you should be checking the arguments instead - though see below for further discussion on XSS.
Also, checking for <script> is not very thorough. It would easily be defeated by <script type="text/javascript"> for example, or by <SCRIPT> (you should add a t:lowercase transformation to ignore case), or by encodings of characters that might be decoded by parts of your application.
Moving on, there is no need to specify the @rx operator, as that's implied when no other operator is given. While there's no harm in being explicit, it's a bit odd that you didn't give it for blah but did for the <script> bit.
It's also advisable to specify a phase rather than relying on the default (phase 2). In this case you'd want phase 2, which is when all the request headers and the request body are available for processing, so the default is probably correct, but it's best to be explicit.
And finally, 404 is the "page not found" response code; a 500 or 503 might be a better response here.
So your rule would be better rewritten as:
SecRule REQUEST_URI "/blah.html" "id:101,phase:2,msg:'XSS Attack',severity:ERROR,deny,status:500,chain"
SecRule ARGS "<script" "t:lowercase"
Though this still doesn't address all the ways that the basic check you are doing for a script tag can be worked around, as mentioned above. The OWASP Core Rule Set is a free set of ModSecurity rules that has been built up over time and has some XSS rules in it that you can check out. Though be warned, some of its regexes get quite complex to follow!
ModSecurity also works better as a more generic check, so it's unusual to want to protect just one page like this (though not completely unheard of). If you know one page is vulnerable to a particular attack, then it's often better to fix that page, or how it's processed, rather than using ModSecurity to handle it. "Fix the problem at source rather than patching around it" is always a good mantra to follow where possible. You would do that by sanitising inputs and escaping any HTML they contain, for example.
That said, it is often a good short-term solution to use a ModSecurity rule to get a quick fix in place while you work on the more correct longer-term fix; this is called "virtual patching". Though sometimes these have a tendency to become the long-term solution, as no one gets time to fix the core problem.
If however you wanted a more generic check for "script" in any arguments for any page, then that's what ModSecurity is more often used for. This helps add extra protection on what already should be there in a properly coded app, as a back up and extra line of defence. For example in case someone forgets to protect one page out of many by sanitising user inputs.
So it might be best to drop the first part of this rule and just have the following to check all pages:
SecRule ARGS "<script" "id:101,phase:2,msg:'XSS Attack',severity:ERROR,deny,status:500,t:lowercase"
Finally, XSS is quite a complex subject. Here I've assumed you want to check parameters sent when requesting a page: if the page uses user input to construct what it displays, then you want to protect that, which is known as "reflected XSS". There are other XSS vulnerabilities though. For example:
Bad data may be stored in a database and used to construct a page, known as "stored XSS". To address this you might want to check the results returned from the page via the RESPONSE_BODY variable in phase 4, rather than the inputs sent to the page via the ARGS variable in phase 2. Though processing response pages is typically slower and more resource-intensive, since responses are usually much larger than requests.
You might be able to execute XSS without going through your server at all, e.g. if loading external content like a third-party commenting system, or if the page is served over HTTP and manipulated between server and client. This is known as "DOM-based XSS", and as ModSecurity runs on your server, it cannot protect against these types of XSS attacks.
I've gone into quite a lot of detail there, but I hope that helps explain things a little more. I found the ModSecurity Handbook the best resource for teaching yourself ModSecurity, despite its age.

dojo load js script and then execute it

I am trying to load a template with XHR and then append it to the page in some div.
The problem is that the page loads the script but doesn't execute it.
The only solution I've got is to add a flag in the page (say "Splitter"): before the splitter I put the JS code, and after the splitter I add the HTML code. When getting the template by AJAX, I split it. Here is an example:
the data I request by ajax is:
//js code:
work_types = <?php echo $work_types; ?>; //json data
<!-- Splitter -->
html code:
<div id="work_types_container"></div>
so the callback returns 'data', which I simply split and execute like this:
data = data.split("<!-- Splitter -->");
dojo.query("#some_div").append(data[1]); //html part
eval(data[0]); //js part
Although this works for me, it doesn't seem very professional!
Is there another way in Dojo to make it work?
If you're using Dojo, it might be worth looking at the dojox/layout/ContentPane module (reference guide). It's quite similar to the dijit/layout/ContentPane variant, but with one special extension: it allows executing the JavaScript on that page (using eval()).
So if you don't want to do all that work by yourself, you could do something like:
<div data-dojo-type="dojox/layout/ContentPane" data-dojo-props="href: myXhrUrl, executeScripts: true"></div>
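If you prefer to do it programmatically instead of declaratively, here is a sketch of the same setup (myXhrUrl is the placeholder URL from above, and work_types_container is the div from your question):

require(["dojox/layout/ContentPane", "dojo/domReady!"], function (ContentPane) {
  var pane = new ContentPane({
    href: myXhrUrl,        // the template is fetched via XHR
    executeScripts: true   // <script> blocks in the response are evaluated
  }, "work_types_container");
  pane.startup();
});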
If you're concerned about it being a DojoX module (DojoX will disappear in Dojo 2.0), the module is labeled as maintained, so it has a higher chance of being integrated into dijit in later versions.
As an answer to your eval() safety question (in the comments): well, it's allowed of course, or they wouldn't have provided a function called eval(). But indeed, it's less secure. The reason is that the client in fact trusts the server and executes everything the server sends to it.
Normally there are no problems, unless the server sends malicious content (due to an issue on your server, or a man-in-the-middle attack), which will be executed and thus cause an XSS vulnerability.
In the ideal world the server only sends data, and the client interprets this data and renders it by itself. In this design the client only trusts data from the server, so no malicious logic can be executed (and there will be no XSS vulnerability).
It's unlikely that this will happen, and the ideal-world solution is not even possible in many cases, since the initial page request (loading your web page) is in fact a similar scenario in which the client executes whatever the server sends.
Web application security is not about being 100% safe (that's impossible); it's about trying to leave as few open doors as possible for hackers. It's up to you to decide what you consider safe, and to verify whether the "ideal world" solution is possible in this specific scenario (it might not be, or it might take too much time compared to the other solution).

Validation of sender identity in websites

I came to think about this question a few days ago when I designed an HTML form that submits data via PHP to an SQL database. I solved my problem, but I am asking a more theoretical question here, which might help me (or others) in the future.
I want to protect myself from SQL injection, and I thought that instead of validating the data with PHP on the server side, I could have JavaScript validate the data on the client side (I am much more fluent in JS than in PHP) and then send it.
However, a sophisticated user might inspect the JavaScript (or the HTTP request) and then alter it to send his own malicious request to the server.
My question:
Is it theoretically possible to do a computation on the client side, where the code is visible to the user, and have the result sent in some way that ensures the data came from my program and not from altered code?
Can this be done by an RSA-scheme with public and private keys?
I want to protect myself from SQL-injection, and I thought that instead of validating the data
Don't validate data to protect yourself from SQL Injection. Validate data to make sure it is in the format you want.
Escape data to protect yourself from SQL Injection (and do that escaping via prepared statements and parameterized queries).
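For illustration, here is a minimal parameterized-query sketch. I'm showing it in JavaScript with the mysql2 package (the same idea in PHP is PDO with bound parameters); the table and column names are made up:

var mysql = require('mysql2');
var conn = mysql.createConnection({ host: 'localhost', user: 'app', database: 'mydb' });

var userEmail = "alice@example.com"; // stand-in for the untrusted form value

// The "?" placeholder is sent to the driver separately from the SQL text,
// so the input can never be parsed as SQL, no matter what it contains.
conn.execute('SELECT id, name FROM users WHERE email = ?', [userEmail],
  function (err, rows) {
    if (err) throw err;
    console.log(rows);
    conn.end();
  });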
Is it theoretically possible to do a computation on the client side, where the code is visible to the user, and have the result sent in some way that ensures the data came from my program and not from altered code?
No. The client-side code can be bypassed entirely. In this arena, it is useful only to quickly tell the user that their data would be rejected if it were submitted to the server.
Can this be done by an RSA-scheme with public and private keys?
No. You have to give one of the keys to the client. It can then be extracted and used independently of your code.

JMeter Tests and Non-Static GET/POST Parameters

What's the best strategy to use when writing JMeter tests against a web application where the values of certain query-string and POST variables are going to change for each run?
Quick, common example:
1. You go to a web page
2. Enter some information into a form
3. Click Save
4. Behind the scenes, a new record is entered in the database
5. You want to edit the record you just entered, so you go to another web page. Behind the scenes it's passing the page a parameter with the database ID of the row you just created
When you're running step 5 of the above test, the page parameter/Database ID is going to change each time.
The workflow/strategy I'm currently using is
Record a test using the above actions
Make a note of each place where a query string variable may change from run to run
Use an XPath or Regular Expression Extractor to pull the value out of a response and into a JMeter variable
Replace all appropriate instances of the hard-coded parameter with the above variable.
This works and can be automated to an extent. However, it can get tedious, is error prone, and fragile. Is there a better/commonly accepted way of handling this situation? (Or is this why most people just use JMeter to play back logs? (-;)
Sounds to me like you're on the right track. The best that can be achieved by JMeter is to extract page variables with a regular-expression or XPath post-processor. However, you're absolutely correct that this is not a scalable solution and becomes increasingly tricky to maintain or grow.
If you've reached this point, then you may want to consider a tool which is more specialised for this sort of problem. Have a look at a web testing tool such as Watir; it will automatically handle changing POST parameters. You would still need to extract parameters if you need to do a database update, but using Watir allows for better code reuse, making the problem less painful.
We have had great success testing similar scenarios with JMeter by storing parameters in JMeter variables within a JDBC assertion. We then do our HTTP GET/POST and use a BSF Assertion with JavaScript to do complex validation of the response. Hope it helps.
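To make that last part concrete, here is roughly what such a scripted assertion can look like (my own sketch using the JavaScript engine; the variable name recordId is hypothetical and would be populated by an earlier extractor or JDBC step):

// Runs inside a BSF/JSR223 Assertion attached to the HTTP sampler.
var body = SampleResult.getResponseDataAsString(); // the sampler's response body
var recordId = vars.get("recordId");               // stored earlier in the test plan

if (body.indexOf(recordId) === -1) {
  AssertionResult.setFailure(true);
  AssertionResult.setFailureMessage("Expected record ID " + recordId + " not found in response");
}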