I've seen a few attempted SQL injection attacks on one of my web sites. It comes in the form of a query string that includes the "cast" keyword and a bunch of hex characters which when "decoded" are an injection of banner adverts into the DB.
My solution is to scan the full URL (and params) and search for the presence of "cast(0x" and if it's there to redirect to a static page.
How do you check your URLs for SQL injection attacks?
I don't.
Instead, I use parameterized SQL queries and let the database treat my input as data rather than executable SQL.
I know, this is a novel concept to PHP developers and MySQL users, but people using real databases have been doing it this way for years.
For Example (Using C#)
// Bad!
SqlCommand foo = new SqlCommand("SELECT FOO FROM BAR WHERE LOL='" + Request.QueryString["LOL"] + "'");

// Good! The database now treats each parameter as raw text, never as SQL.
SqlCommand foo = new SqlCommand("SELECT FOO FROM BAR WHERE LOL = @LOL");
foo.Parameters.AddWithValue("@LOL", Request.QueryString["LOL"]);
This.
edit: MSDN's Patterns & Practices guide on preventing SQL injection attacks. Not a bad starting point.
I don't. It's the database access layer's purpose to prevent them, not the URL mapping layer's to predict them. Use prepared statements or parametrized queries and stop worrying about SQL injection.
I think it depends on what level you're looking to check/prevent SQL Injection at.
At the top level, you can use URLScan or some Apache Mods/Filters (somebody help me out here) to check the incoming URLs to the web server itself and immediately drop/ignore requests that match a certain pattern.
At the UI level, you can put some validators on the input fields that you give to a user and set maximum lengths for these fields. You can also white list certain values/patterns as needed.
At the code level, you can use parametrized queries, as mentioned above, to make sure that string inputs go in as purely string inputs and don't attempt to execute T-SQL/PL-SQL commands.
You can do it at multiple levels. Most of my stuff to date covers the second two levels, and I'm working with our server admins to get the top layer in place.
Is that more along the lines of what you want to know?
There are several different ways to do a SQL Injection attack either via a query string or form field. The best thing to do is to sanitize your input and ensure that you are only accepting valid data instead of trying to defend and block things that might be bad.
What I don't understand is how terminating the request as soon as a SQL injection is detected in the URL would not be part of a defense.
(I'm not claiming this to be the entire solution - just part of the defense.)
Every database has its own extensions to SQL. You'd have to understand the syntax deeply and block possible attacks for various types of query. Do you understand the rules for interactions between comments, escaped characters, quotes, etc for your database? Probably not.
Looking for fixed strings is fragile. In your example, you block cast(0x, but what if the attacker uses CAST (0x? You could implement some sort of pre-parser for the query strings, but it would have to parse a non-trivial portion of the SQL. SQL is notoriously difficult to parse.
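To see how fragile the fixed-string approach is, here is a hypothetical sketch of such a filter (the URLs are made up to mirror the example above):

```python
def naive_filter(url: str) -> bool:
    """Return True if the URL should be blocked (contains the literal pattern)."""
    return "cast(0x" in url

# The exact string from the observed attack is caught...
print(naive_filter("/page?q=1;cast(0x41 as char)"))    # True

# ...but trivial variations slip through: a case change, extra
# whitespace, or URL-encoding of any single character.
print(naive_filter("/page?q=1;CAST(0x41 as char)"))    # False
print(naive_filter("/page?q=1;cast (0x41 as char)"))   # False
print(naive_filter("/page?q=1;cast%280x41 as char)"))  # False
```

Every bypass requires another rule, which is exactly the update treadmill described below.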
It muddies up the URL dispatch, view, and database layers. Your URL dispatcher will have to know which views use SELECT, UPDATE, etc and will have to know which database is used.
It requires active updating of the URL scanner. Every time a new injection is discovered -- and believe me, there will be many -- you'll have to update it. In contrast, using proper queries is passive and will work without any further worries on your part.
You'll have to be careful that the scanner never blocks legitimate URLs. Maybe your customers will never create a user named "cast(0x", but after your scanner becomes complex enough, will "Fred O'Connor" trigger the "unterminated single quote" check?
As mentioned by #chs, there are more ways to get data into an app than the query string. Are you prepared to test every view that can be POSTed to? Every form submission and database field?
Thanks for the answers and links. Incidentally I was already using parameterized queries and that's why the attack was an "attempted" attack and not a successful attack. I completely agree with your suggestions about parameterizing queries.
The MSDN link posted above mentions "constraining the input" as part of the approach, which is part of my current strategy. It also mentions that a drawback of this approach is that you may miss some input that is dangerous.
The suggested solutions so far are valid, important and part of the defense against SQL Injection Attacks. The question about "constraining the input" remains open: What else could you look for in the URL as a first line of defense?
What else could you look for in the URL as a first line of defense?
Nothing. There is no defense to be found in scanning URLs for dangerous strings.
@John - can you elaborate?
What I don't understand is how terminating the request as soon as a SQL injection is detected in the URL would not be part of a defense.
(I'm not claiming this to be the entire solution - just part of the defense.)
I have an API like
"/getXXXX?ABC=X7TRYUV&Ab_DEF=true&Ab_XYZ=true&Ab_ExZ=ZXTY"
How can I check the request parameters for vulnerabilities?
What types of strings can I pass?
I ran the API through the Wapiti and SQLMap tools but found no issues.
Manually, I tested it by changing "Ab_ExZ=ZXTY" to 'CHR(91%2d1)'XTY, and it filtered the result as a correct parameter when it should not have.
Thanks,
Bibek
Unfortunately, the answer to your question is: it depends. There is a lot of useful information about injection-style attacks available from OWASP. The exact strings you should use depend on the underlying technology of your solution and on the characters (e.g. terminating characters) that are significant at each stage the data is processed.
A starting point for testing injection is to try to terminate the statement or command. For example, in Oracle PL/SQL the characters '; will work: the quote closes the string and the semicolon terminates the command. If the query is prone to injection attacks, this will most likely produce a database error for a malformed query.
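That effect is easy to demonstrate. Here is a small sketch using SQLite as a stand-in (the exact error text differs per database): a stray quote, even in perfectly legitimate input, breaks a concatenated query, while a parameterized one handles it as plain data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES (?)", ("Fred O'Connor",))

name = "Fred O'Connor"  # a perfectly legitimate input

# String concatenation: the apostrophe terminates the SQL string
# early, and the rest of the name becomes a syntax error.
try:
    conn.execute("SELECT * FROM users WHERE name = '" + name + "'")
    concat_failed = False
except sqlite3.OperationalError:
    concat_failed = True
print("concatenated query errored:", concat_failed)

# Parameterized: the same input is just data, found without fuss.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
print(rows)
```

If a tester sees that kind of syntax error come back from an endpoint, it is a strong hint the input is being spliced into SQL.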
Obviously other databases will have slightly different syntax. Also worth considering is the underlying OS. If the input to the API is ending up being executed at the command line, is it Windows or Linux based? This will change the syntax that you want to try.
Finally, if data is being stored, where is it then rendered? If it is rendered in a web UI, you can try inputting <b>; if your API allows this to be stored and then displayed to the user without being escaped, you will see bold text. This would indicate a second-order injection attack. (The actual risk arises when the data is retrieved, rather than when it is sent.)
I strongly recommend taking a look at the injection information on OWASP's site, including the WebGoat examples, where you can try injection-style testing against a deliberately vulnerable web site. The principles translate nicely to API testing.
Is there a way to update the Liferay's site page's friendly name through a SQL script?
We generally do this in the control panel through admin user.
While @steven35's answer might do the job, you're hitting a pet peeve of mine. You're doing it right if you make the change through the Control Panel or through the API, and you should not think about ever writing to Liferay's database directly. It might work for the moment, but it might also fail in unforeseen ways - sometimes long after your update.
There have been plenty of examples of this going wrong. If you change data while Liferay is running, the cache will not be updated. If these values are also indexed in the search index, they won't be updated there either, and later lookups might not find the correct page until you reindex everything. The same value might be stored somewhere else, or translated. Numerous conditions can fail, and there's always one condition more than you expect and cater for. That one condition might break your neck.
Granted, the friendly name of a page might not fall into the most complex of these cases, but just don't get into the habit of writing to Liferay's database. Or, if you do, don't complain about future upgrades failing or requiring extra work, because the database contains values that the API didn't expect. The problem is that during the next upgrade (if you do it in - say - one year) you'll long have forgotten that you manually changed data in the database and blame Liferay for problems during your upgrade.
Changing data is exactly what the UI and the API are for.
Friendly urls are stored in LayoutFriendlyURL.friendlyURL in your Liferay database so the following query should work
UPDATE "yourdatabase"."LayoutFriendlyURL" SET "friendlyURL" = '/newurl' WHERE "layoutFriendlyURLId" = 12345;
You will also need to update the Layout table accordingly to match the new friendly url.
For example I've often wanted to search stackoverflow with
SELECT whatever FROM questions WHERE
views * N + votes * M > answers AND NOT(answered) ORDER BY views;
or something like that.
Is there any reasonable way to allow users to use SQL as a search/filter language?
I see a few problems with it:
Accessing/changing stuff (a carefully setup user account should fix that)
SQL injection (given the previous point, the worst they should be able to do is get back junk or crash their session).
DOS attacks with pathological queries
What indexes do you give them?
Edit: I'd like to allow joins and what not as well.
Accessing/changing stuff
No problem, just run the query with a crippled user, with permissions only to select
SQL injection
Just sanitize the query
DOS attacks
Time-out the query and throttle the access by IP. I guess you can also throttle the CPU usage in some servers
If you do SQLEncode your users' input (and make sure to remove all ; as well!), I see no huge safety flaw (other than that we're still handing nukes out to psychos...) in having three input boxes - one for table, one for columns and one for conditions. They won't be able to have strings in their conditions, but queries like your example should work. You will do the actual pasting together of the SQL statement, so you'll be in control of what is actually executed. If your setup is good enough you'll be safe.
BUT, I wouldn't for my life let my user enter SQL like that. If you want to really customize search options, give either a bunch of flags for the search field, or a bunch of form elements that can be combined at will.
Another option is to invent some kind of "markup language", sort of like Markdown (the framework SO uses for formatting all these questions and answers...), that you can translate to SQL. Then you can make sure that only "harmless" selects are performed, and you can protect user data etc.
In fact, if you ever implement this, you should see if you could run the commands from a separate account on the SQL server, which only has access to the very basic needs, and obviously only read access.
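As a sketch of that restricted-account idea, SQLite's read-only URI mode can stand in for a SELECT-only database user (the file name and schema here are invented for the example):

```python
import os
import sqlite3
import tempfile

# Build a throwaway database file (mode=ro requires a real file).
path = os.path.join(tempfile.mkdtemp(), "demo.db")
rw = sqlite3.connect(path)
rw.execute("CREATE TABLE questions (views INTEGER, votes INTEGER)")
rw.execute("INSERT INTO questions VALUES (100, 5)")
rw.commit()
rw.close()

# Open the same database read-only: the crippled, SELECT-only account.
ro = sqlite3.connect("file:" + path + "?mode=ro", uri=True)
rows = ro.execute("SELECT views, votes FROM questions").fetchall()
print(rows)  # [(100, 5)]

# Any attempt to write is rejected by the engine itself,
# regardless of what SQL the user managed to submit.
try:
    ro.execute("INSERT INTO questions VALUES (0, 0)")
    write_rejected = False
except sqlite3.OperationalError:
    write_rejected = True
print("write rejected:", write_rejected)
```

On a real server you would grant a dedicated account SELECT on only the tables you intend to expose, which achieves the same containment.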
Facebook does this with FQL. See the blog post or presentation.
I just thought of a strong sanitize method that could be used to restrict what can be used.
Use MySQL and grab its lex/yacc files
use the lex file as is
gut the yacc file to only the things you want to allow
use action rules that spit out the input on success.
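The gutted-grammar idea can be sketched without lex/yacc: tokenize the filter expression and accept only an explicit allowlist of tokens. The column names below are hypothetical, and a real version would still want a proper parser rather than a flat token check:

```python
import re

ALLOWED_COLUMNS = {"views", "votes", "answers"}  # hypothetical schema
ALLOWED_OPS = {"+", "-", "*", ">", "<", "=", "(", ")", "and", "or", "not"}

TOKEN = re.compile(r"\s*(\d+|\w+|[+\-*<>=()])")

def validate_filter(expr):
    """Accept expr only if every token is on the allowlist."""
    pos = 0
    while pos < len(expr.rstrip()):
        m = TOKEN.match(expr, pos)
        if m is None:
            raise ValueError("bad character at position %d" % pos)
        tok = m.group(1)
        if not (tok.isdigit() or tok.lower() in ALLOWED_COLUMNS | ALLOWED_OPS):
            raise ValueError("token not allowed: " + tok)
        pos = m.end()
    return expr

print(validate_filter("views * 2 + votes * 3 > answers"))  # passes through
try:
    validate_filter("views > 0; DROP TABLE questions")
    rejected = False
except ValueError as e:
    rejected = True
    print("rejected:", e)
```

Anything outside the allowlist, including the semicolon and string literals, is refused before it ever reaches the database.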
I would like to be able to loop through all of the defined parameters on my reports and build a display string of the parameter name and value. I'd then display the results on the report so the user knows which parameters were used for that specific execution. The only problem is that I cannot loop through the Parameters collection. There doesn't seem to be an indexer on the Parameters collection, nor does it seem to implement IEnumerable. Has anyone been able to accomplish this? I'm using SSRS 2005 and it must be implemented within the Report Code (i.e., no external assembly). Thanks!
Unfortunately, it looks like there's no simple way to do this.
See http://www.jameskovacs.com/blog/DiggingDeepIntoReportingServices.aspx for more info. If you look at the comments of that post, there are some ways to get around this, but they're not very elegant. The simplest solution will require you to have a list of the report parameters somewhere in your Report Code, which obviously violates the DRY principle, but if you want the simplest solution, you might just have to live with that.
You might want to rethink your constraint of no external assembly, as it looks to me that it would be much easier to do this with an external assembly. Or if your report isn't going to change much, you can create the list of parameter names and values manually.
If I'm understanding your question, just do what I do:
Drop a textbox on the report, then while you are setting up the report, insert the following:
="Parameter1: " + Parameters!Parameter.Label + ", Parameter2: " + Parameters!Parameter2.Label...
Granted, it's not the prettiest thing, but it does work pretty well in our app.
And I'm using Labels instead of Values since we have datetime values, and the user only cares about either the short date or the month and year (depending on circumstance), and I've already done that formatting work in setting up the parameters.
I can think of at least two ways to do this. The first might work, the second will definitely work.
Use the web service. I'm pretty sure I saw an API for getting a collection of parameters. Even if there's no direct access, you can always create a standard collection and copy the ReportParameter objects from one to the other in a foreach loop - and then access Count, with individual parameter properties available by dereferencing the ReportParameter instances.
Reports are RDL. RDL is XML. Create an XmlDocument and load the RDL file, then use the DOM to do, well, anything you like up to and including setting default values or even rewriting connection strings.
If your app won't have file-system access to the RDL files you can get them via the web service.
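The RDL-is-XML approach might look like this sketch; the fragment below is a trimmed, illustrative stand-in for a real report definition (actual RDL lives in a Microsoft reportdefinition XML namespace and has many more elements):

```python
import xml.etree.ElementTree as ET

# A trimmed, illustrative fragment, not real RDL.
rdl = """
<Report>
  <ReportParameters>
    <ReportParameter Name="StartDate"><Prompt>Start date</Prompt></ReportParameter>
    <ReportParameter Name="Region"><Prompt>Region</Prompt></ReportParameter>
  </ReportParameters>
</Report>
"""

# Walk the DOM and collect every parameter's name and prompt.
root = ET.fromstring(rdl)
params = [(p.get("Name"), p.findtext("Prompt"))
          for p in root.iter("ReportParameter")]
print(params)
```

The equivalent C# with XmlDocument would also need namespace-aware lookups against the real schema.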
OK, first let me state that I have never used this control and this is also my first attempt at using a web service.
My dilemma is as follows. I need to query a database to get back a certain column and use that for my autocomplete. Obviously I don't want the query to run every time a user types another character in the textbox, so my best guess is to run the query once, then use that dataset, array, list, or whatever to filter for the autocomplete extender...
I am kind of lost. Any suggestions?
Why not keep track of the query executed by the user in a session variable, then use that to filter any further results?
The trick to preventing the database from overloading, I think, is simply to limit how frequently the autocompleter is allowed to update; something like once per 2 seconds seems reasonable to me.
What I would do is this: Store the current list returned by the query for word A server side and tie that to a session variable. This should be basically the entire list I would think. Then, for each new word typed, so long as the original word A exists, you can filter the session info and spit the filtered results out without having to query again. So basically, only query again when word A changes.
I'm using "session" in a PHP sense, you may be using a different language with different terminology, but the concept should be the same.
This depends on how transactional your data store is. If you are looking up US states (a data collection that realistically will not change through the life of the application), then I would cache either a System.Collections.Generic.List<> or, if you wanted, a DataTable.
You could easily set up a cache of the data you wish to query to be dependent upon an XML file or database so that your extender always queries the data object casted from the cache and the cache object is only updated when the datasource changes.
RAM is cheap and SQL is harder to scale than IIS, so cache everything in memory:
- your entire data source, if it is not too large to load in reasonable time,
- precalculated data,
- autocomplete web service responses.
Depending on your desired autocomplete behavior and performance, you may want to precalculate data and create redundant structures optimized for reading. Make use of structures like SortedList (when you need something like select top x ... where z like @query + '%'), Hashtable, ...
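The SortedList suggestion, serving a "like @query + '%'" lookup from a presorted in-memory structure, can be sketched with a binary search over a sorted list:

```python
import bisect

# Sorted once at load time: the precalculated, read-optimized copy.
words = sorted(["banana", "band", "bandana", "apple", "bandit", "apply"])

def prefix_matches(prefix, limit=10):
    """Roughly: SELECT TOP limit w FROM words WHERE w LIKE prefix + '%'."""
    i = bisect.bisect_left(words, prefix)  # first candidate >= prefix
    out = []
    while i < len(words) and len(out) < limit and words[i].startswith(prefix):
        out.append(words[i])
        i += 1
    return out

print(prefix_matches("band"))  # ['band', 'bandana', 'bandit']
```

Each lookup costs a binary search plus one step per match, with no round trip to the database.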
While caching everything is certainly a good idea, your question about which data structure to use is an issue that wasn't fully answered here.
The best data structure for an autocomplete extender is a Trie.
You can find a good .NET article and code here.
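As a minimal sketch of the data structure itself, here is a trie with insert and prefix completion:

```python
class TrieNode:
    def __init__(self):
        self.children = {}   # char -> TrieNode
        self.is_word = False

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def complete(self, prefix):
        """Return all stored words starting with prefix, in sorted order."""
        node = self.root
        for ch in prefix:
            if ch not in node.children:
                return []
            node = node.children[ch]
        out = []
        def walk(n, acc):
            if n.is_word:
                out.append(acc)
            for ch, child in sorted(n.children.items()):
                walk(child, acc + ch)
        walk(node, prefix)
        return out

t = Trie()
for w in ["car", "card", "care", "dog"]:
    t.insert(w)
print(t.complete("car"))  # ['car', 'card', 'care']
```

Walking to the prefix costs one step per character, after which every descendant is a valid completion, which is exactly why tries suit autocomplete.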