Ran a security scan against a URL and received the report below:
The vulnerability affects
/rolecall.cfm , bbb_id
This is the rolecall.cfm code:
<cfscript>
if (isDefined("url") and isDefined("url.bbb_id")) {
if (url.dept_id eq -1)
_include("sql", "getB");
else
_include("sql", "getBNow");
}
/*...*/
_include("sql", "getDPlaces");
/*Set up the model and go*/
model = {
add = 1,
edit = 0,
remove = 0,
places = getDPlaces
};
</cfscript>
If you're using IIS, you should read this article to see how to add SQL Injection protection directly to the web server. This will keep attack requests from ever reaching ColdFusion.
Be cautious of the strings they suggest you deny:
<denyStrings>
<add string="--" />
<add string=";" />
<add string="/*" />
<add string="#" />
</denyStrings>
If a legitimate value can contain one of those strings (an email address passed as a query string parameter, for example), you'll end up rejecting a valid request. You can allow the # symbol if needed.
I would also highly suggest you take a look at HackMyCF, which will show you many other security concerns if they exist.
SQL injection exploits databases by stuffing malicious SQL commands into a query where they're not expected, tricking the query into doing something different from what it was designed to do, like performing a DROP or DELETE instead of a SELECT.
Queries that use raw client parameters like this are vulnerable:
WHERE policy_funct_id = #url.dept_id#
Instead, always wrap client-supplied parameters in cfqueryparam, which prevents them from being executed as commands. I don't know your column data types, so modify the cfsqltype as needed.
WHERE policy_funct_id = <cfqueryparam value="#url.dept_id#" cfsqltype="cf_sql_integer">
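The same principle applies in any language: a bound parameter is treated as a single value and never parsed as SQL. A minimal sketch in Python using the stdlib sqlite3 module (the dept table and its contents here are hypothetical, just to make the effect visible):

```python
import sqlite3

# In-memory demo database with a hypothetical dept table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dept (policy_funct_id INTEGER, name TEXT)")
conn.execute("INSERT INTO dept VALUES (1, 'Accounting')")

# Attacker-controlled input, analogous to url.dept_id above.
user_input = "1 OR 1=1"

# Unsafe: string interpolation lets the payload become part of the SQL,
# so the WHERE clause matches every row.
unsafe_sql = f"SELECT name FROM dept WHERE policy_funct_id = {user_input}"
unsafe_rows = conn.execute(unsafe_sql).fetchall()

# Safe: a bound parameter is compared as one literal value, never
# executed as SQL, so the payload matches nothing.
safe_rows = conn.execute(
    "SELECT name FROM dept WHERE policy_funct_id = ?", (user_input,)
).fetchall()

assert len(unsafe_rows) == 1 and len(safe_rows) == 0
```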
All of the dynamic table names are another potential vulnerability, for example:
-- potential sql-injection risk
SELECT * FROM #db.root#
If #db.root# is user-supplied, it's a SQL injection risk. Unfortunately, cfqueryparam cannot be used on table names; those must be manually (and carefully) validated.
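One common way to validate a table name manually is to check it against a fixed whitelist before interpolating it. A rough sketch in Python (the table names are hypothetical stand-ins for the #db.*# variables, not the app's real names):

```python
# Table names can't be bound as query parameters, so validate them
# against a fixed whitelist before interpolating them into SQL.
ALLOWED_TABLES = {"policy_root", "policy_content", "policy_dept"}

def build_query(table_name: str) -> str:
    """Return a SELECT for a known table, or refuse anything unexpected."""
    if table_name not in ALLOWED_TABLES:
        raise ValueError(f"unexpected table name: {table_name!r}")
    return f"SELECT * FROM {table_name}"
```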
A few other suggestions, unrelated to SQL injection:
All the nested (select * from ...) statements decrease readability. Instead, use a single level of JOINs.
When using JOINs, it's best to specify the source table (or table alias) for all columns. That avoids ambiguity and increases readability for yourself and anyone else reviewing the code; no need to guess which column comes from which table.
Example
-- pseudo example
SELECT root.ColumnA
, root.ColumnB
, dept.ColumnC
, subcat.ColumnC
, etc...
FROM #db.root# root
INNER JOIN #db.content# content ON root.policy_root_id = content.content_id
INNER JOIN #db.dept# AS dept ON content.dept_id = dept.policy_funct_id
INNER JOIN #db.subcat# subcat ON subcat.dept_id = dept.policy_funct_id
WHERE dept.policy_funct_id = <cfqueryparam value="#url.dept_id#" cfsqltype="cf_sql_integer">
AND content.is_newest = 1
According to the PeopleBook here, the CreateRowset function has the parameters {FIELD.fieldname, RECORD.recname}, which are used to specify the related display record.
I had tried to use it like the following (just for example):
&rs1 = CreateRowset(Record.User, Field.UserId, Record.UserName);
&rs1.Fill();
For &k = 1 To &rs1.ActiveRowCount
MessageBox(0, "", 999999, 99999, &rs1(&k).UserName.Name.Value);
End-for;
(Record.User contains only UserId (key) and Password.
Record.UserName contains UserId (key) and Name.)
I cannot get the value of UserName.Name. Do I misunderstand the usage of this parameter?
Fill is the problem. From the doco:
Note: Fill reads only the primary database record. It does not read
any related records, nor any subordinate rowset records.
Having said that, it is the only way I know to bulk-populate a standalone rowset from the database, so I can't easily see a use for the field in the rowset.
The simplest solution is just to create a view, but that gets old very quickly if you have to do it a lot. The alternative is to loop through the rowset yourself, loading the related fields. Something like:
For &k = 1 To &rs1.ActiveRowCount
&rs1(&k).UserName.UserId.value = &rs1(&k).User.UserId.value;
&rs1(&k).UserName.SelectByKey();
End-for;
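For readers outside PeopleSoft, the Fill-then-SelectByKey pattern above is just a bulk load of the primary record followed by a per-key lookup of the related record. A rough analogue in Python with sqlite3 (table names, columns, and data are all made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE user_rec  (user_id TEXT PRIMARY KEY, password TEXT);
    CREATE TABLE user_name (user_id TEXT PRIMARY KEY, name TEXT);
    INSERT INTO user_rec  VALUES ('U1', 'x'), ('U2', 'y');
    INSERT INTO user_name VALUES ('U1', 'Alice'), ('U2', 'Bob');
""")

# Fill() analogue: bulk-load only the primary record.
rows = conn.execute("SELECT user_id FROM user_rec ORDER BY user_id").fetchall()

# SelectByKey() analogue: fetch the related record one key at a time.
names = []
for (user_id,) in rows:
    (name,) = conn.execute(
        "SELECT name FROM user_name WHERE user_id = ?", (user_id,)
    ).fetchone()
    names.append(name)
```

As with the PeopleCode version, this is one extra query per row; a view (or JOIN) does the same work in a single statement.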
This is an extremely common situation, so I'm expecting a good solution. Basically we need to update counters in our tables. As an example a web page visit:
Web_Page
--------
Id
Url
Visit_Count
So in hibernate, we might have this code:
webPage.setVisitCount(webPage.getVisitCount()+1);
The problem there is that reads in MySQL, by default, don't pay attention to transactions, so a highly trafficked web page will have inaccurate counts.
The way I'm used to doing this type of thing is simply call:
update Web_Page set Visit_Count=Visit_Count+1 where Id=12345;
I guess my question is, how do I do that in Hibernate? And secondly, how can I do an update like this in Hibernate which is a bit more complex?
update Web_Page wp set wp.Visit_Count=(select stats.Visits from Statistics stats where stats.Web_Page_Id=wp.Id) + 1 where Id=12345;
The problem there is that reads in MySQL, by default, don't pay attention to transactions, so a highly trafficked web page will have inaccurate counts.
Indeed. I would use a DML-style operation here (see chapter 13.4, "DML-style operations"):
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
String hqlUpdate = "update WebPage wp set wp.visitCount = wp.visitCount + 1 where wp.id = :id";
int updatedEntities = session.createQuery( hqlUpdate )
        .setLong( "id", 12345L )
        .executeUpdate();
tx.commit();
session.close();
Which should result in
update Web_Page set Visit_Count=Visit_Count+1 where Id=12345;
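For contrast, here is a minimal sketch in Python with the stdlib sqlite3 module (hypothetical web_page table) showing the racy read-modify-write pattern next to the atomic single-statement UPDATE that the DML-style query produces:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE web_page (id INTEGER PRIMARY KEY, visit_count INTEGER)")
conn.execute("INSERT INTO web_page VALUES (12345, 0)")

# Racy pattern: read the counter, increment in application code, write back.
# Two concurrent sessions can both read the same value and lose an update.
(count,) = conn.execute(
    "SELECT visit_count FROM web_page WHERE id = 12345"
).fetchone()
conn.execute("UPDATE web_page SET visit_count = ? WHERE id = 12345", (count + 1,))

# Atomic pattern: the database does the arithmetic in one statement,
# so there is no stale read to lose.
conn.execute("UPDATE web_page SET visit_count = visit_count + 1 WHERE id = 12345")

(final_count,) = conn.execute(
    "SELECT visit_count FROM web_page WHERE id = 12345"
).fetchone()
```

The race itself only shows up with multiple concurrent sessions, but the two statement shapes are the same as in the Hibernate discussion above.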
And secondly, how can I do an update like this in Hibernate which is a bit more complex?
Hmm... I'm tempted to say "you're screwed"... need to think more about this.
A stored procedure offers several benefits:
If the schema changes, the code need not change when the call is simply increment($id).
Concurrency issues can be localized.
Faster execution in many cases.
A possible implementation is:
create procedure increment (IN p_id integer)
begin
    -- the parameter is named p_id so it does not shadow the id column
    update web_page
    set visit_count = visit_count + 1
    where id = p_id;
end
Ok, I have a simple process...
Read a table and get the rows that have a "StatusID" of 1. Simple.
Select ProductID from PreorderStatus where StatusID = 1
For each row returned from that query, perform an action. For simplicity's sake, let's just modify the original table to set the "StatusID" to 2.
Update PreorderStatus set StatusID = 2 where ProductID = @ProductID
In order to do this in SSIS, I have created a simple "Execute SQL Task" with the first statement. In the editor I have set the Result Set to return a Full result set and the Result Name of 0 is set to fill an object variable named ReadySet.
The output is then routed to a For Each Loop container. The Enumerator is set to Foreach ADO Enumerator and the object source variable set to the ReadySet variable from above. I have also mapped the variable v_ProductID to index 0.
Setting a breakpoint at the beginning of the Foreach loop shows the variable being set correctly. GREAT!! Now on to step two....
Now I have placed a new SQL task in the foreach container, and now I have a head scratcher: how do I actually use the variable in the SQL statement? Simply using "v_ProductID" or "User::v_ProductID" doesn't seem to work. Mapping a parameter seemed like a good idea (got a @ProductID and everything!) but that didn't seem to work either.
I get the feeling that I am missing something pretty simple but can't tell what. Thanks for any help!!
I think there is a better approach. Here are the approximate steps:
Drag a DataFlow task onto the design surface.
Open it up and add an OLE DB Source and an OLE DB Command component to the design surface.
Modify the source to use the query you have described.
Connect the source to the Command component.
Modify the command component to use the query "Update PreorderStatus set StatusID = 2 where ProductID = ?", and on the parameter mapping page map the ? parameter to the input column coming from the data source.
HTH
When I want to use an execute sql task and vary something based on a variable, I use a stored proc and make the variable the input parameter for the proc.
Then you set the parameter in the Execute SQL Task and set the SQL statement to something like:
exec myproc ?
I've got a stored procedure that lets an IN parameter specify which database to use. I then use a pre-decided table in that database for a query. The problem I'm having is concatenating the table name to that database name within my queries. If T-SQL had an evaluate function, I could do something like
eval(@dbname + 'MyTable')
Currently I'm stuck creating a string and then using exec() to run that string as a query. This is messy and I would rather not have to create a string. Is there a way I can evaluate a variable or string so I can do something like the following?
SELECT *
FROM eval(@dbname + 'MyTable')
I would like it to evaluate so it ends up appearing like this:
SELECT *
FROM myserver.mydatabase.dbo.MyTable
Read this: The Curse and Blessings of Dynamic SQL. It helped me a lot in understanding how to solve this type of problem.
There's no "neater" way to do this. You'll save time if you accept it and look at something else.
EDIT: Aha! Regarding the OP's comment that "We have to load data into a new database each month or else it gets too large.". Surprising in retrospect that no one remarked on the faint smell of this problem.
SQL Server offers native mechanisms for dealing with tables that get "too large" (in particular, partitioning), which will allow you to address the table as a single entity, while dividing the table into separate files in the background, thus eliminating your current problem altogether.
To put it another way, this is a problem for your DB administrator, not the DB consumer. If that happens to be you as well, I suggest you look into partitioning this table.
Try the built-in sp_executesql procedure.
You can basically build up your SQL string in your proc, then call:
exec sp_executesql @SQLString
DECLARE @SQLString nvarchar(max)
SELECT @SQLString = '
SELECT *
FROM ' + @TableName
EXEC sp_executesql @SQLString
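When a name truly must be interpolated into a dynamic SQL string like this, SQL Server's QUOTENAME() function helps keep a hostile value confined to the identifier position. A rough Python sketch of that bracket-quoting rule (this is an illustrative stand-in, not the real implementation):

```python
def quotename(identifier: str) -> str:
    """Rough sketch of SQL Server's QUOTENAME(): wrap the name in
    brackets and double any closing bracket inside it, so the value
    cannot break out of the identifier position."""
    return "[" + identifier.replace("]", "]]") + "]"

# A hostile "table name" stays confined inside the brackets.
sql = "SELECT * FROM " + quotename("MyTable]; DROP TABLE users; --")
```

Quoting is a complement to, not a replacement for, whitelisting the names you expect.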
You can't specify a dynamic table name in SQL Server.
There are a few options:
Use dynamic SQL
Play around with synonyms (which means less dynamic SQL, but still some)
You've said you don't like 1, so let's go for 2.
The first option is to restrict the messiness to one line:
begin transaction t1;
declare @statement nvarchar(100);
set @statement = 'create synonym temptablesyn for db1.dbo.test;'
exec sp_executesql @statement
select * from temptablesyn
drop synonym temptablesyn;
rollback transaction t1;
I'm not sure I like this, but it may be your best option. This way all of the SELECTs will be the same.
You can refactor this to your heart's content, but there are a number of disadvantages to this, including that the synonym is created in a transaction, so you can't have two of the queries running at the same time (because both will be trying to create temptablesyn). Depending upon the locking strategy, one will block the other.
Synonyms are permanent, which is why you need to do this in a transaction.
There are a few options, but they are messier than what you are already doing. I suggest you either:
(1) Stick with the current approach
(2) Go ahead and embed the SQL in the code since you are doing it anyway.
(3) Be extra careful to validate your input to avoid SQL Injection.
Also, messiness isn't the only problem with dynamic SQL. Remember the following:
(1) Dynamic SQL thwarts the server's ability to create a reusable execution plan.
(2) The EXECUTE command breaks the ownership chain. That means the code will run in the context of the user who calls the stored procedure, NOT the owner of the procedure. This might force you to open up security on whatever table the statement runs against and create other security issues.
Just a thought, but if you had a pre-defined list of these databases, then you could create a single view, in the database that you connect to, joining them together; something like:
CREATE VIEW dbo.all_tables
AS
SELECT your_columns,
'db_name1' AS database_name
FROM db_name1.dbo.your_table
UNION ALL
SELECT your_columns,
'db_name2'
FROM db_name2.dbo.your_table
etc...
Then, you could pass your database name in to your stored procedure and simply use it as a parameter in a WHERE clause. If the tables are large, you might consider using an indexed view, indexed on the new database_name column (or whatever you call it) and the tables' primary key (I'm assuming from the question that the tables' schemas are the same?).
Obviously if your list of databases changes frequently, then this becomes more problematic; but if you're having to create these databases anyway, then maintaining this view at the same time shouldn't be too much of an overhead!
I think Mark Brittingham has the right idea (here: http://stackoverflow.com/questions/688425/evaluate-in-t-sql/718223#718223), which is to issue a USE command and write the sp to NOT fully qualify the table name. As he notes, this will act on the tables in the login's current database.
Let me add a few possible elaborations:
From a comment by the OP, I gather the database is changed once a month, when it gets "too large". ("We have to load data into a new database each month or else it gets too large. – d03boy")
User logins have a default database, set with sp_defaultdb (deprecated) or ALTER LOGIN. If each month you move on to the new database, and don't need to run the sp on the older copies, just change the login's default db monthly, and again, don't fully qualify the table name.
The database to use can be set in the client login:
sqlcmd -U login_id -P password -d db_name, then exec the sp from there.
You can establish a connection to the database using the client of your choice (command line, ODBC, JDBC), then issue a USE command, then exec the sp:
use bar;
exec sp_foo;
Once the database has been set using one of the above, you have three choices for executing the stored procedure:
You could just copy the sp along with the database, in to the new database. As long as the table name is NOT fully qualified, you'll operate on the new database's table.
exec sp_foo;
You could install the single canonical copy of the sp in its own database, call it procs, with the table name not fully qualified, and then call it by its fully qualified name:
exec procs.dbo.sp_foo;
You could, in each individual database, install a stub sp_foo that execs the fully qualified name of the real sp, and then exec sp_foo without qualifying it. The stub will be called, and it will call the real procedure in procs. (Unfortunately, USE dbname cannot be executed from within an sp.)
--sp_foo stub, created in database bar:
use bar
go
create proc dbo.sp_foo
@parm int
as
begin
exec procs.dbo.sp_foo @parm;
end
go
However this is done, if the database is being changed, the real sp should be created with the WITH RECOMPILE option; otherwise it'll cache an execution plan for the wrong table. The stub, of course, doesn't need this.
You could create a SQL CLR table-valued UDF to access the tables. You have to tie it to a fixed schema because TV-UDFs don't support a dynamic schema. (My sample includes an ID and a Title column; modify for your needs.)
Once you've done this, you should be able to run the following query:
SELECT * FROM dbo.FromMyTable('table1')
You can include a multipart name in that string too.
SELECT * FROM dbo.FromMyTable('otherdb..table1')
to return the ID,Title columns from that table.
You will likely need to enable SQL CLR and turn on the TRUSTWORTHY option:
sp_configure 'clr enabled',1
go
reconfigure
go
alter database mydatabase set trustworthy on
Create a C# SQL project, add a new UDF file, and paste this in there. Set the project property Database > Permission Level to External. Build and deploy. This can be done without Visual Studio; let me know if you need that.
using System;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;
using System.Collections;
using System.Data.SqlClient;
[assembly: CLSCompliant(true)]
namespace FromMyTable
{
public static partial class UserDefinedFunctions
{
[Microsoft.SqlServer.Server.SqlFunction(DataAccess = DataAccessKind.Read, IsDeterministic = true, SystemDataAccess = SystemDataAccessKind.Read, IsPrecise = true, FillRowMethodName = "FillRow",
TableDefinition = "id int, title nvarchar(1024)")]
public static IEnumerable FromMyTable(SqlString tableName)
{
return new FromMyTable(tableName.Value);
}
public static void FillRow(object row, out SqlInt32 id, out SqlString title)
{
MyTableSchema v = (MyTableSchema)row;
id = new SqlInt32(v.id);
title = new SqlString(v.title);
}
}
public class MyTableSchema
{
public int id;
public string title;
public MyTableSchema(int id, string title) { this.id = id; this.title = title; }
}
internal class FromMyTable : IEnumerable
{
string tableName;
public FromMyTable(string tableName)
{
this.tableName = tableName;
}
public IEnumerator GetEnumerator()
{
return new FromMyTableEnum(tableName);
}
}
internal class FromMyTableEnum : IEnumerator
{
SqlConnection cn;
SqlCommand cmd;
SqlDataReader rdr;
string tableName;
public FromMyTableEnum(string tableName)
{
this.tableName = tableName;
Reset();
}
public MyTableSchema Current
{
get { return new MyTableSchema((int)rdr["id"], (string)rdr["title"]); }
}
object IEnumerator.Current
{
get { return Current; }
}
public bool MoveNext()
{
bool b = rdr.Read();
if (!b) { rdr.Dispose(); cmd.Dispose(); cn.Dispose(); rdr = null; cmd = null; cn = null; }
return b;
}
public void Reset()
{
// note: cannot use a context connection here because it will be closed
// in between calls to the enumerator.
if (cn == null) { cn = new SqlConnection("server=localhost;database=mydatabase;Integrated Security=true;"); cn.Open(); }
if (cmd == null) cmd = new SqlCommand("select id, title FROM " + tableName, cn);
if (rdr != null) rdr.Dispose();
rdr = cmd.ExecuteReader();
}
}
}
declare @sql nvarchar(256);
set @sql = 'select * into ##myGlobalTemporaryTable from ' + @dbname
exec sp_executesql @sql
select * from ##myGlobalTemporaryTable
This copies the data into a global temporary table, which you can then use like a regular table.
If you have a reasonably manageable number of databases, it may be best to use a pre-defined conditional statement like:
if (#dbname = 'db1')
select * from db1..MyTable
if (#dbname = 'db2')
select * from db2..MyTable
if (#dbname = 'db3')
select * from db3..MyTable
...
You can generate this proc as part of your database creation scripts if you are changing the list of databases available to query.
This avoids the security concerns with dynamic SQL. You can also improve performance by replacing the SELECT statements with stored procedures targeting each database (one cached execution plan per query).
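The same dispatch idea can be sketched in Python: a fixed mapping from database name to a canned query, where anything outside the whitelist is rejected rather than interpolated into SQL (the database names are the hypothetical db1/db2/db3 from the example above):

```python
# Whitelist dispatch: each known database name maps to a fixed query,
# so no user-supplied string is ever spliced into SQL.
QUERIES = {
    "db1": "select * from db1..MyTable",
    "db2": "select * from db2..MyTable",
    "db3": "select * from db3..MyTable",
}

def query_for(dbname: str) -> str:
    """Return the canned query for a known database, else refuse."""
    try:
        return QUERIES[dbname]
    except KeyError:
        raise ValueError(f"unknown database: {dbname!r}")
```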
if exists (select * from master..sysservers where srvname = 'fromdb')
exec sp_dropserver 'fromdb'
go
declare @mydb nvarchar(99);
set @mydb = 'mydatabase'; -- variable to select database
exec sp_addlinkedserver @server = N'fromdb',
@srvproduct = N'',
@provider = N'SQLOLEDB',
@datasrc = @@servername,
@catalog = @mydb
go
select * from OPENQUERY(fromdb, 'select * from table1')
go
go