Situation:
I'm doing some penetration testing for a friend of mine and have full clearance to go all out on a demo environment. The reason is that I spotted an XSS hole in his online ASP application (an error page that takes the error text as a parameter and allows HTML).
He has an Access DB, and because of his lack of input validation I came upon another hole: he allows SQL injection in a WHERE clause.
I tried some stuff from:
http://www.krazl.com/blog/?p=3
But this gave limited results:
MSysRelationships is readable, but the MSysObjects table is shielded.
' UNION SELECT 1,1,1,1,1,1,1,1,1,1 FROM MSysRelationships WHERE '1' = '1 <-- this worked, so I know the original query selects ten columns (a UNION has to match the column count). I don't know how I can exploit the relationships table to get table names; I can't find any explanation of its structure, so I don't know what to select on.
I tried brute-forcing some table names, but to no avail.
I do not want to trash his DB, but I do want to point out the serious flaw with some backing.
Does anyone have ideas?
Usually there are two ways to proceed from here. You could try to guess table names by the type of data that is stored in them, which often works ("users" usually stores the user data ...). The other method would be to provoke informative error messages in the application to see whether you can fetch table or column names from them.
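For the guessing approach, a minimal probe might look like the lines below. This assumes the same ten-column layout as the probe above and a purely hypothetical table name (users) and column name (username); a wrong guess produces a database error on the error page, while a correct guess lets the page render normally.
' UNION SELECT 1,1,1,1,1,1,1,1,1,1 FROM users WHERE '1' = '1
' UNION SELECT username,1,1,1,1,1,1,1,1,1 FROM users WHERE '1' = '1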
I am creating a PHP page for a team of application testers who need to frequently see the content of an Oracle database as part of the testing process. The page takes the SQL query from a text area and uses the oci8 library to execute it against the database.
However, as the describe command (short desc) is a feature of Oracle SQL*Plus, I am trying to emulate its functionality inside a SQL query. Here's what I've come up with so far:
SELECT column_name "Name",
CASE WHEN nullable = 'Y' THEN 'NULL'
WHEN nullable = 'N' THEN 'NOT NULL'
END AS "Null",
concat(concat(concat(data_type,'('),data_length),')') "Type"
FROM all_tab_columns
WHERE table_name='{$TABLE}'
This seems to be working for most of the tables, but not for "v$database" or "v$instance". I understand that the references to these system views are not present inside "all_tab_columns" and the users will not be searching for them, but I want the query to work for all tables and views just for the sake of completeness.
So if anyone can suggest a better way, please guide me.
You can use dbms_sql.describe_columns2 to get the same information as describe does.
It takes quite some effort to get it actually working: you have to parse the statement and then get the metadata from it. The nice thing is that it works for virtually any query, even for views or queries that contain calculated columns, joins, etc.
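A minimal PL/SQL sketch of that approach might look like the following; the query text is just a placeholder, and note that col_type comes back as Oracle's numeric type code, so you still have to map it to a type name yourself:
DECLARE
  c       INTEGER := DBMS_SQL.OPEN_CURSOR;
  n_cols  INTEGER;
  cols    DBMS_SQL.DESC_TAB2;
BEGIN
  -- parse (but do not execute) the statement whose shape we want to describe
  DBMS_SQL.PARSE(c, 'SELECT * FROM v$instance', DBMS_SQL.NATIVE);
  DBMS_SQL.DESCRIBE_COLUMNS2(c, n_cols, cols);
  FOR i IN 1 .. n_cols LOOP
    DBMS_OUTPUT.PUT_LINE(
      cols(i).col_name || '  type code ' || cols(i).col_type ||
      ' (' || cols(i).col_max_len || ')' ||
      CASE WHEN cols(i).col_null_ok THEN ' NULL' ELSE ' NOT NULL' END);
  END LOOP;
  DBMS_SQL.CLOSE_CURSOR(c);
END;
/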
I've been trawling around the internet for a demo of second order SQLi, but I still haven't found one. Many sites don't really give a thorough explanation of how it works.
I need to present a short demonstration and I've been practicing with Mutillidae. Can anybody point me in the right direction?
A Google search for 'second order sql injection' comes up with a number of more or less relevant explanations of what Second Order SQL Injection is, with differing degrees of detail (as you say).
The basic idea is that the database stores some text from the user that is later incorporated into an SQL statement — but the text is insufficiently sanitized before reuse.
Think of an application which allows a user to create user-defined queries against a database. A simple example might be a bug tracking system. Some of the user-defined query attributes might be simple conditions such as 'bug status is "closed"'. This might be coded by looking at the stored query definition:
CREATE TABLE UserDefinedQuery
(
...user info...,
bug_status VARCHAR(20),
...other info...
);
SELECT ..., bug_status, ...
INTO ..., hv_bug_status, ...
FROM UserDefinedQuery
WHERE bug_status IS NOT NULL
AND ...other criteria...
where hv_bug_status is a host variable (PHP, C, whatever language you're using) holding the bug status criterion.
If the stored value is the fragment = 'closed', then the resulting SQL might contain:
SELECT *
FROM Bugs
WHERE status = 'closed'
AND ...other criteria...
Now suppose that when the user defined their query, they wrote instead:
= 'open' or 1=1
This means that the generated query now looks like:
SELECT *
FROM Bugs
WHERE status = 'open' or 1=1
AND ...other criteria...
The presence of the OR changes the meaning of the query dramatically and will show all sorts of records that the user was not meant to see. This is a bug in the bug-querying application. If this modification means that CustomerX can see bugs reported by other customers CustomerY and CustomerZ that they are not supposed to see, then CustomerX has managed to create a second order SQL injection attack. (If the injection simply means that they get to see more records than they should, including ones that aren't relevant to them, then they've simply created a buggy query.)
Clearly, in a VARCHAR(20) field, your options for injecting lethal SQL are limited simply because SQL is a verbose language. But 'little Bobby Tables' could strike if the criteria are stored in a longer field.
='';DELETE Bugs;--
(Using a non-standard contraction for the DELETE statement; that squeaks in at 18 characters.)
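For completeness, with that stored value the generated statement would come out along these lines; the SELECT itself matches nothing interesting, the DELETE runs as a second statement (where the driver allows multiple statements per call), and the trailing criteria end up behind the comment marker:
SELECT *
FROM Bugs
WHERE status ='';DELETE Bugs;-- AND ...other criteria...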
How can you avoid this? Don't allow the user to write raw SQL fragments that you include in the generated SQL. Treat the value in UserDefinedQuery.Bug_Status as a space/comma separated list of string values, and build the query accordingly:
SELECT *
FROM Bugs
WHERE status IN ('=', '''open''', 'or', '1=1')
AND ...other criteria...
The query may not be useful, but it doesn't get its structure altered by the data in the UserDefinedQuery table.
I have an SQLCLR trigger. It contains a large and messy SELECT inside, with parts like:
(CASE WHEN EXISTS(SELECT * FROM INSERTED I WHERE I.ID = R.ID)
THEN '1' ELSE '0' END) AS IsUpdated -- Is selected row just added?
as well as JOINs etc. I'd like to have the result as a single table with everything included.
Question 1. Can I move this SELECT to the SQL Server side? If yes, how do I do this?
By "move", I mean creating a stored procedure or something else that can be executed before reading the data set in the while loop.
The two following questions only make sense if the answer is "yes".
Why do I want to move the SELECT? First, I don't like mixing SQL with C# code. Second, I suppose that server-side queries run faster, since the server has more chances to cache them.
Question 2. Am I right? Is this some sort of optimization?
Also, the SELECT contains constant strings, but they are localizable. For instance,
WHERE R.Status = 'Enabled'
"Enabled" should be changed for French, German etc. So, I want to write 2 static methods -- OnCreate and OnDestroy -- then mark them as stored procedures. When registering/unregistering my assembly on server side, just call them respectively. In OnCreate format the SELECT string, replacing {0}, {1}... with required values from the assembly resources. Then I can localize resources only, not every script.
Question 3. Is it good idea? Is there an existing attribute to mark methods to be executed by SQL Server automatically after (un)registartion an assembly?
Well, the SQL-CLR trigger will also execute on the server, inside the server process, so that's server-side as well; no benefit there.
But I agree: triggers ought to be written in T-SQL whenever possible; there's no real big benefit to having triggers in C#. Can you show us the whole trigger code? Unless it contains really oddball stuff, it should be pretty easy to convert to T-SQL.
I don't see how you could "move" the SELECT to the SQL side and keep the rest of the code in C#: either your trigger is in T-SQL (my preference), or it is in C#/SQL-CLR. I don't think there's any way to "mix and match".
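As a rough illustration of such a conversion (the table name dbo.R, the ID column and the target table are placeholders taken from your snippet, not your real schema), a pure T-SQL trigger might start out like this; whatever you do with the result set replaces the C# while loop:
CREATE TRIGGER trg_R_AfterUpdate
ON dbo.R
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- same flag as the CASE WHEN EXISTS(...) in the SQLCLR version:
    -- '1' for rows of R touched by this statement, '0' otherwise
    INSERT INTO dbo.R_ChangeLog (ID, IsUpdated)   -- hypothetical target table
    SELECT R.ID,
           CASE WHEN EXISTS (SELECT * FROM inserted I WHERE I.ID = R.ID)
                THEN '1' ELSE '0'
           END AS IsUpdated
    FROM dbo.R AS R;
    -- ...JOINs and the rest of the original SELECT would go here...
END;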
To start with, you probably do not need that type of subquery inside whatever query you are doing. The inserted table only has rows that have been updated (or inserted, but we can assume this is an UPDATE trigger based on the comment in your code). So you can either INNER JOIN, in which case you will only match the rows of the table aliased "R" that were updated, or you can LEFT JOIN, in which case you can tell which rows of R were updated because the ones that were not show NULL in all the joined columns.
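As a sketch, with the same placeholder names, the LEFT JOIN variant would replace the EXISTS subquery like this:
SELECT R.*,
       CASE WHEN I.ID IS NULL THEN '0' ELSE '1' END AS IsUpdated
FROM dbo.R AS R
LEFT JOIN inserted AS I
       ON I.ID = R.ID;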
Question 1) As marc_s said below, the Trigger executes in the context of the database. But it goes beyond that. ALL database related code, including SQLCLR executes in the database. There is no client-side here. This is the issue that most people have with SQLCLR: it runs inside of the SQL Server context. And regarding wanting to call a Stored Proc from the Trigger: it can be done BUT the INSERTED and DELETED tables only exist within the context of the Trigger itself.
Question 2) It appears that this question should have started with the words "Also, the SELECT". There are two things to consider here. First, when testing for "Status" values (or any Lookup values) since this is not displayed to the user you should be using numeric values. A "status" of "Enabled" should be something like "1" so that the language is not relevant. A side benefit is that not only will storing Status values as numbers take up a lot less space, but they also compare much faster. Second is that any text that is to be displayed to the user that needs to be sensitive to language differences should be in a table so that you can pass in a LanguageId or LocaleId to get the appropriate French, German, etc. strings to display. You can set the LocaleId of the user or system in general in another table.
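A minimal sketch of that layout, with invented table and column names, might be:
CREATE TABLE dbo.StatusType      -- numeric lookup values
(
    StatusId   INT         NOT NULL PRIMARY KEY,   -- e.g. 1 = enabled
    StatusCode VARCHAR(20) NOT NULL                -- internal, language-neutral code
);

CREATE TABLE dbo.StatusTypeName  -- display names per locale
(
    StatusId   INT           NOT NULL REFERENCES dbo.StatusType (StatusId),
    LocaleId   INT           NOT NULL,
    StatusName NVARCHAR(100) NOT NULL,             -- 'Enabled', 'Activé', 'Aktiviert', ...
    PRIMARY KEY (StatusId, LocaleId)
);

-- The trigger/query then compares numbers, not localized text:
-- WHERE R.StatusId = 1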
Question 3) If by "registration" you mean that the Assembly is either CREATEd or DROPped, then you can trap those events via DDL Triggers. You can look here for some basics:
http://msdn.microsoft.com/en-us/library/ms175941(v=SQL.90).aspx
Both CREATE ASSEMBLY and DROP ASSEMBLY are trappable events.
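A rough sketch of such a DDL trigger (the trigger name is made up, and what you do inside it is up to you) might be:
CREATE TRIGGER ddl_AssemblyEvents
ON DATABASE
FOR CREATE_ASSEMBLY, DROP_ASSEMBLY
AS
BEGIN
    -- EVENTDATA() returns XML describing the DDL statement that fired the trigger
    DECLARE @evt XML;
    SET @evt = EVENTDATA();
    PRINT @evt.value('(/EVENT_INSTANCE/EventType)[1]',  'NVARCHAR(100)') + ': ' +
          @evt.value('(/EVENT_INSTANCE/ObjectName)[1]', 'NVARCHAR(257)');
    -- ...call your OnCreate / OnDestroy procedures from here...
END;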
If you are speaking of when Assemblies are loaded and unloaded from memory, then I do not know of a way to trap that.
Question 1.
http://www.sqlteam.com/article/stored-procedures-returning-data
Question 3.
It looks like there are no appropriate attributes, at least not in the Microsoft.SqlServer.Server namespace.
I have a relatively simple select statement in a VB6 program that I have to maintain. (Suppress your natural tendency to shudder; I inherited the thing, I didn't write it.)
The statement is straightforward (reformatted for clarity):
select distinct
b.ip_address
from
code_table a,
location b
where
a.code_item = b.which_id and
a.location_type_code = '15' and
a.code_status = 'R'
The query returns a list of IP addresses from the database. The key column is code_status. Some time ago, we realized that one of the IP addresses was no longer valid, so we changed its status to I (invalid) to exclude it from the query's results.
When you execute the query above in SQL Plus, or in SQL Developer, everything is fine. But when you execute it from VB6, the check against code_status is ignored, and the invalid IP address appears in the result set.
My first guess was that the results were cached somewhere. But, not being an Oracle expert, I have no idea where to look.
This is ancient VB6 code. The SQL is embedded in the application. At the moment, I don't have time to rewrite it as a stored procedure. (I will some day, given the chance.) But, I need to know what would cause this disparity in behavior and how to eliminate it. If it's happening here, it's likely happening somewhere else.
If anyone can suggest a good place to look, I'd be very appreciative.
Some random ideas:
Are you sure you committed the changes that invalidate the ip-address? Can someone else (using another db connection / user) see the changed code_status?
Are you sure that the results are not modified after they are returned from the database?
Are you sure that you are using the "same" database connection in SQL*Plus as in the code (database, user etc.)?
Are you sure that this is indeed the SQL sent to the database? (You may check by tracing on the Oracle server, as sketched after this list, or by debugging the VB code.) Reformatting may have changed "something".
Off the top of my head I can't think of any "caching" that might "re-insert" the unwanted IP. Hope something from the above gives you some ideas on where to look.
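If you want to go down the tracing route, a minimal sketch (assuming you have the DBMS_MONITOR privilege; the username and the SID/SERIAL# values are placeholders) would be:
-- find the VB6 application's session
SELECT sid, serial#, program
FROM v$session
WHERE username = 'YOUR_APP_USER';

-- switch SQL trace on for that session, run the application, then switch it off
-- and look at the trace file on the server (123/456 stand for the values found above)
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 456, waits => TRUE, binds => TRUE);
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 456);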
In addition to the suggestions that IronGoofy has made, have you tried swapping round the last two clauses?
where
a.code_item = b.which_id and
a.code_status = 'R' and
a.location_type_code = '15'
If you get a different set of results then this might point to some sort of wrangling going on that results in dodgy SQL actually being sent to the database.
There are Oracle bugs that result in incorrect answers. This surely isn't one of those times. Usually they involve some bizarre combination of views and functions and dblinks and lunar phases...
It's not cached anywhere. Oracle doesn't cache query results until 11g, and even then it knows to invalidate the cache when the answer may change.
I would guess this is a data issue. You have a DISTINCT on the IP address in the query, why? If there's no unique constraint, there may be more than one copy of your IP address and you only fixed one of them.
And your Code_status is in a completely different table from your IP addresses. You set the status to "I" in the code table and you get the list of IPs from the Location table.
Stop thinking zebras and start thinking horses. This is almost certainly just data you do not fully understand.
Run this
select
a.location_type_code,
a.code_status
from
code_table a,
location b
where
a.code_item = b.which_id and
b.ip_address = <the one you think you fixed>
I bet you get one row with an 'I' and another row with an 'R'
I'd suggest you have a look at the V$SQL system view to confirm that the query you believe the VB6 code is running is actually the query it is running.
Something along the lines of:
select sql_text, fetches
from v$sql
where sql_text like '%ip_address%'
Verify that the SQL_TEXT is the one you expect and that the FETCHES count goes up as you execute the code.
I'm looking for a pattern for performing a dynamic search on multiple tables.
I have no control over the legacy (and poorly designed) database table structure.
Consider a scenario similar to a resume search where a user may want to perform a search against any of the data in the resume and get back a list of resumes that match their search criteria. Any field can be searched at anytime and in combination with one or more other fields.
The actual SQL query gets created dynamically depending on which fields are searched. Most solutions I've found involve complicated if blocks, but I can't help thinking there must be a more elegant solution, since this must be a solved problem by now.
Yeah, so I've started down the path of dynamically building the SQL in code. It seems godawful. If I really try to support the requested ability to query any combination of any field in any table, this is going to be one MASSIVE set of if statements. Shiver.
I believe I read that COALESCE only works if your data does not contain NULLs. Is that correct? If so, no go, since I have NULL values all over the place.
As far as I understand (and I'm also someone who has written against a horrible legacy database), there is no such thing as dynamic WHERE clauses. It has NOT been solved.
Personally, I prefer to generate my dynamic searches in code. It makes testing convenient. Note: when you create your SQL queries in code, don't concatenate user input into them. Use your @variables (bind parameters)!
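As a sketch of the "build it in code but keep it parameterized" idea, here it is expressed in T-SQL with sp_executesql, using the same hypothetical Users table described below; the same pattern works from client code:
DECLARE @sql  NVARCHAR(MAX) = N'SELECT Name, Nickname FROM Users WHERE 1 = 1';
DECLARE @name NVARCHAR(20)  = N'brian';   -- NULL means "not searched"
DECLARE @nick NVARCHAR(10)  = NULL;

-- only append the predicates that were actually supplied
IF @name IS NOT NULL SET @sql += N' AND Name = @name';
IF @nick IS NOT NULL SET @sql += N' AND Nickname = @nick';

-- the values themselves travel as parameters, never as concatenated text
EXEC sp_executesql @sql,
     N'@name NVARCHAR(20), @nick NVARCHAR(10)',
     @name = @name, @nick = @nick;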
The only alternative is to use the COALESCE function. Let's say you have the following table:
Users
-----------
Name nvarchar(20)
Nickname nvarchar(10)
and you want to search optionally for name or nickname. The following query will do this:
SELECT Name, Nickname
FROM Users
WHERE
Name = COALESCE(@name, Name) AND
Nickname = COALESCE(@nick, Nickname)
If you don't want to search for something, just pass in a null. For example, passing in "brian" for @name and null for @nick results in the following query being evaluated:
SELECT Name, Nickname
FROM Users
WHERE
Name = 'brian' AND
Nickname = Nickname
COALESCE turns the null into an identity comparison (Nickname = Nickname), which is true for every row where that column has a value, so the condition effectively drops out of the WHERE clause. Be aware that rows where the column itself is NULL are still filtered out, which is the limitation you read about; one common workaround is sketched below.
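A variation that keeps the optional-parameter style but also copes with NULLs in the data (a minimal sketch against the same hypothetical Users table) is:
SELECT Name, Nickname
FROM Users
WHERE
(@name IS NULL OR Name = @name) AND
(@nick IS NULL OR Nickname = @nick)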
Search and normalization can be at odds with each other. So probably the first thing to do would be to build some kind of "view" that shows all the searchable fields as a single row, with a single key that gets you back to the resume. Then you can put something like Lucene in front of that to give you a full-text index of those rows. The way that works is: you ask it for "x" in this view and it returns the key to you. It's a great solution and comes recommended by Joel himself on the podcast, within the first two months IIRC.
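A sketch of such a denormalized view, with invented table and column names and using STRING_AGG (or your RDBMS's equivalent) to squash the child rows into one text column per resume, might look like:
CREATE VIEW ResumeSearch AS
SELECT r.ResumeId,        -- the single key that gets you back to the resume
       r.Name,
       r.Address,
       (SELECT STRING_AGG(e.SchoolName, ' ')
        FROM Education e
        WHERE e.ResumeId = r.ResumeId) AS EducationText,
       (SELECT STRING_AGG(w.EmployerName, ' ')
        FROM WorkHistory w
        WHERE w.ResumeId = r.ResumeId) AS WorkExperienceText
FROM Resume r;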
What you need is something like SphinxSearch (for MySQL) or Apache Lucene.
As you said in your example, let's imagine a resume composed of several fields:
Name,
Address,
Education (this could be a table of its own), or
Work experience (this could grow into its own table where each row represents a previous job).
So searching for a word in all those fields with WHERE rapidly becomes a very long query with several JOINs.
Instead you could change your frame of reference and think of the whole resume as what it is: a single document that you want to search.
This is what tools like Sphinx Search do. They create a FULL TEXT index of your "document", and then you can query Sphinx and it will tell you where in the database that record was found.
The search results are really good.
Don't worry about these tools not being part of your RDBMS; it will save you a lot of headaches to use the appropriate model ("documents") rather than the wrong one ("tables") for this application.