I am inheriting an application which has to read data from various types of files and use the OCI interface to move the data into an Oracle database. Most of the tables in question have about 40-50 columns, so the SQL insert statements become pretty lengthy.
When I inherited this code, it basically built up the insert statements via a series of strcats as a C string, then passed it to the appropriate OCI functions to set up and execute the statement. However, since much of the data is read directly from files into the column values, this leaves the application open to easy SQL injection. So I am trying to use bind variables to solve this problem.
In every example OCI application I can find, each variable is statically allocated and bound individually. This would lead to quite a bit of boilerplate, however, and I'd like to reduce it to some sort of looping construct. So my solution is, for each table, to create a static array of strings containing the names of the table columns:
const char *const TABLE_NAME[N_COLS] = {
"COL_1",
"COL_2",
"COL_3",
...
"COL_N"
};
along with a short function that makes a placeholder out of a column name:
void makePlaceholder(char *buf, const char *col);
// "COLUMN_NAME" -> ":column_name"
So I then loop through each array and bind my values to each column, generating the placeholders as I go. One potential problem here is that, because the types of each column vary, I bind everything as SQLT_STR (strings) and thus expect Oracle to convert to the proper datatype on insertion.
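Concretely, the binding loop I have in mind looks roughly like this (a sketch, not tested: error handling is trimmed, and the statement handle, error handle and per-column value buffers are assumed to be set up elsewhere):

#include <string.h>
#include <oci.h>

#define VAL_LEN 256   /* assumed max width of one column value as text */

void makePlaceholder(char *buf, const char *col);   /* from above */

/* Bind values[i] (a NUL-terminated string) to the placeholder derived
   from colNames[i]. Everything is bound as SQLT_STR, leaving the
   conversion to each column's real type to Oracle at insert time. */
sword bindAllColumns(OCIStmt *stmtp, OCIError *errhp,
                     const char *const colNames[], int nCols,
                     char values[][VAL_LEN])
{
    int i;
    for (i = 0; i < nCols; i++) {
        char ph[64];
        OCIBind *bindp = NULL;            /* OCI owns the bind handle */
        sword rc;
        makePlaceholder(ph, colNames[i]); /* "COL_1" -> ":col_1" */
        rc = OCIBindByName(stmtp, &bindp, errhp,
                           (const OraText *)ph, (sb4)strlen(ph),
                           values[i], VAL_LEN, SQLT_STR,
                           NULL, NULL, NULL, 0, NULL, OCI_DEFAULT);
        if (rc != OCI_SUCCESS)
            return rc;                    /* caller reports via errhp */
    }
    return OCI_SUCCESS;
}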
So, my questions are:
1. What is the proper/idiomatic way (if such a thing exists for SQL/OCI) to use bind variables for SQL insert statements with a large number of columns/params? More generally, what is the best way to use OCI to make this type of large insert statement?
2. Do large numbers of bind calls have a significant cost in efficiency compared to building and using vanilla C strings?
3. Is there any risk in binding all variables as strings and allowing Oracle to make the proper type conversion?
Thanks in advance!
Not sure about the C aspects of this. My answer will be from a DBA perspective.
Question 2:
Always use bind variables. It prevents SQL injection and enhances performance.
The performance aspect is often overlooked by programmers. When Oracle receives a SQL statement, it hashes the entire SQL text and looks in its internal repository of execution plans to see if it already has one. If bind variables were used, the SQL text is the same each time you run the query, no matter what the value of a variable is. However, if you have concatenated the string yourself, Oracle hashes the SQL text including the content of (what you ought to have put in) variables, getting a unique hash every time. So if you run a query one million times, with bind variables Oracle will make one execution plan, while without them it will make one million execution plans and waste loads of resources doing that.
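To make that concrete (a made-up query; only the shape of the SQL text matters):

SELECT * FROM orders WHERE id = 42   -- literal: new text, new hash, new plan for every value
SELECT * FROM orders WHERE id = :id  -- bind: one text, one hash, one reusable plan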
I have an ABAP class method, say, select_something. select_something has an exporting parameter, say, et_result. et_result is of type standard table because the type of et_result cannot be determined until runtime.
The method sometimes gives a short dump saying "With ABAP/4 Open SQL array select, the output table is too small" at the statement "select * into table et_result from (lv_tablename) where..."
Error analysis:
......in this particular case, the database table is 3806 bytes wide, but the internal table is only 70 bytes wide.
I tried "any table" too and the error is the same.
You could return a data reference. Your query will no longer fail, and you can assign the data to a correctly typed field symbol afterwards.
" Definition
class-methods select_all
importing
!tabname type string
returning
value(results) type ref to data.
...
...
" Implementation
method select_all.
data dref type ref to data.
create data dref type standard table of (tabname).
field-symbols <tab> type any table.
assign dref->* to <tab>.
select * from (tabname) into table <tab>.
get reference of <tab> into results.
endmethod.
Also, I agree with @vwegert that dynamic queries (and dynamic programming for that matter) should be avoided when possible.
What you're trying to do looks horribly wrong on many levels. NEVER use SELECT FROM (whatever) unless someone points a gun at your head AND the door is locked tight. You'll lose every kind of static error checking the system might be able to provide you with. For example, the compiler will no longer be able to tell you "Hey, that table you're reading from is 3806 bytes wide." It simply can't tell, even if you use constants. You'll find that out the hard way, producing short dumps, especially when switching between Unicode and NUC systems, quite likely some in production systems. No fun.
(Actually there are a few - very very VERY few - good uses for dynamic table names in the SELECT statement. I need them about once every two to three years, and I code quite a lot of weird stuff. Just avoid them wherever you can, even at the cost of writing more code. It's just not worth the trouble of fixing broken stuff later.)
Then, changing the generic formal parameter type does not do anything to the type of the actual parameter. If you pass a STANDARD TABLE OF mandt WITH DEFAULT KEY to your method, that table will have lines of 3 characters. It will be a STANDARD TABLE, and as such, it will also be an ANY TABLE, and that's about it. You can twist the generic types any way you like; there's no way to enforce correctness using generic types the way you use them. It's up to the caller to make sure that all the right types are used. That's a bad way to fly.
First off, I agree with vwegert's response: try to avoid dynamic SQL selections if you can.
That said, check the short dump. If the error is an exception class, you can wrap the SELECT statement in a try/catch block and at least stop it from dumping.
You can also try "INTO CORRESPONDING FIELDS OF TABLE et_result". If ET_RESULT is dynamic, you might have to cast it into the proper structure using RTTS. This might give you some ideas...
Couldn't agree more with vwegert, but if there is absolutely no other way (and there usually is) of performing your task than using dynamic select statements and dynamically typed parameters, do some checks on the type of the table and the parameter at runtime.
Use CL_ABAP_TYPEDESCR and its subclasses to do so.
This way, you can handle errors at runtime without your program dumping.
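For example, something along these lines (a sketch; lv_expected_width stands in for whatever minimum line width your dynamic select actually needs):

" Inspect the actual parameter at runtime before selecting into it.
DATA lo_tab  TYPE REF TO cl_abap_tabledescr.
DATA lo_line TYPE REF TO cl_abap_datadescr.
lo_tab ?= cl_abap_typedescr=>describe_by_data( et_result ).
lo_line = lo_tab->get_table_line_type( ).
IF lo_line->length < lv_expected_width.
  " raise a proper exception here instead of letting the SELECT dump
ENDIF.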
But as vwegert said, this dynamic stuff is pure evil and will most certainly break at some point during runtime. Adding the necessary error handling will most likely be a lot more work and a lot harder than redesigning your code to use non-dynamic SQL and typed parameters.
OK, at first glance, it seems that it must be more efficient to use SQLPrepare+SQLBindParameter+SQLExecute than to format the string (e.g. with CString::Format) and pass the complete query string to SQLExecDirect. If not, why would the second method (SQLPrepare+SQLBindParameter+SQLExecute) exist at all?
BUT... here is what I think: the driver has to, sooner or later (I suspect later, but anyway...), convert the parameters (that I feed it with SQLBindParameter) into a string representation, right? (Or maybe not?) So if I do this formatting in my application (printf-like formatting), will I have any loss in performance?
One thing I suspect is that when the connection is over a network, passing parameters as raw data and formatting them at the server end might decrease network traffic compared to passing preformatted query strings, but let's ignore network traffic for a moment. That aside, is there any performance gain in using SQLPrepare+SQLBindParameter+SQLExecute instead of formatting the full query string in the application and then using SQLExecDirect?
For me, using SQLExecDirect is simpler and more convenient, so I need a good answer on whether (and when) I should opt for the other approach.
Important: if you say that the SQLPrepare+SQLBindParameter+SQLExecute approach gives better performance, I'd like to know how much! I don't mind theoretical explanations, but I'd like to know when it is worth it in practice. My current use case is not very DB-intensive; I won't have more than 100 inserts/updates per second. Is it OK to use SQLExecDirect? In what scenarios - if ever - do I have to use SQLPrepare+SQLBindParameter+SQLExecute?
If you are inserting or updating with the same SQL (excluding parameters) repeatedly then SQLPrepare, SQLBindParameter and SQLExecute will be faster than SQLExecDirect every time. Consider:
SQLPrepare("insert into mytable (cola, colb) values(?,?);");
for (n = 0; n < 10000; n++) {
SQLBindParameter(1, n);
SQLBindParameter(2, n);
SQLExecute;
}
and
for (n = 0; n < 10000; n++) {
    char sql[1000];
    sprintf(sql, "insert into mytable (cola, colb) values(%d,%d)", n, n);
    SQLExecDirect(hstmt, (SQLCHAR *)sql, SQL_NTS);
}
In the first example, the statement is prepared once, so the DB engine only has to parse it and work out an execution plan once. In the second example, the SQL and parameters are passed every time and the SQL text looks different every time, so it is parsed each time.
In addition, in the first example you can use arrays of parameters to pass multiple rows of parameters in one go - see SQL_ATTR_PARAMSET_SIZE.
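For illustration, array binding looks roughly like this (a sketch against the same hypothetical mytable; error checking omitted):

#include <sql.h>
#include <sqlext.h>

#define ROWS 10000

/* Insert ROWS rows with a single SQLExecute: bind whole arrays and
   tell the driver how many parameter sets they contain. */
void insert_many(SQLHSTMT hstmt)
{
    static SQLINTEGER a[ROWS], b[ROWS];
    static SQLUSMALLINT status[ROWS];
    SQLULEN done = 0;
    SQLINTEGER n;

    for (n = 0; n < ROWS; n++) { a[n] = n; b[n] = n; }

    SQLSetStmtAttr(hstmt, SQL_ATTR_PARAMSET_SIZE, (SQLPOINTER)(SQLULEN)ROWS, 0);
    SQLSetStmtAttr(hstmt, SQL_ATTR_PARAM_STATUS_PTR, status, 0);
    SQLSetStmtAttr(hstmt, SQL_ATTR_PARAMS_PROCESSED_PTR, &done, 0);

    SQLPrepare(hstmt, (SQLCHAR *)"insert into mytable (cola, colb) values(?,?)", SQL_NTS);
    SQLBindParameter(hstmt, 1, SQL_PARAM_INPUT, SQL_C_SLONG, SQL_INTEGER, 0, 0, a, 0, NULL);
    SQLBindParameter(hstmt, 2, SQL_PARAM_INPUT, SQL_C_SLONG, SQL_INTEGER, 0, 0, b, 0, NULL);
    SQLExecute(hstmt);   /* the driver sends all 10000 parameter sets */
}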
See 3.1.2 Inserting data for a worked example and an indication of how much time you can save.
Ignore network traffic; you'll just be second-guessing what happens under the hood in the driver.
ADDITION:
Regarding your description of what happens with parameters, where you seem to think the driver will convert them to strings: the other advantage of binding parameters is that you can provide them in one type and ask the driver to use them as another type. You may come across a parameter type which cannot easily be represented as a string without adding some sort of conversion function, all of which can be avoided with a bound parameter.
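For instance, a timestamp can be bound straight from the C structure ODBC defines for it, with no string formatting on the application side at all (a sketch; assumes parameter 1 of an already-prepared statement):

#include <sql.h>
#include <sqlext.h>

/* Bind a TIMESTAMP_STRUCT directly; the driver converts it to the
   column's type, so there is no date-format string to get wrong. */
void bind_created_at(SQLHSTMT hstmt, TIMESTAMP_STRUCT *ts)
{
    SQLBindParameter(hstmt, 1, SQL_PARAM_INPUT,
                     SQL_C_TYPE_TIMESTAMP, SQL_TYPE_TIMESTAMP,
                     19, 0,          /* column size / fractional digits */
                     ts, 0, NULL);
}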
Yes, it's a bad idea, and for two reasons:
Performance
SQLPrepare causes the SQL statement to be parsed, and depending on the SQL statement this can be time consuming. If you're using a DB on another server, the statement might get sent to it for parsing. Even if parsing takes only, say, 10% of the time it takes to execute the whole query, you save that time from the second execution onwards. That may be the case when you're inserting multiple rows, or run the same select another time.
Of course, for this to help, the SQL statement passed must always be the same static string. Some SQL frameworks may even do prepared-statement caching for you; I don't know if ODBC does this. If you want real performance numbers, you have to measure for yourself - every query is different (and might even depend on the table contents, too).
SQL Injection
No matter where you say the data you're formatting with CString::Format or any other method comes from, you might be at risk of SQL injection. Even if you're only using strings from your own sources today, someone may later change your code to accept data from outside, and then you're vulnerable to SQL injection. If you need more info about SQL injection, just search StackOverflow; I'm sure there are some good questions about it.
I am fully familiar with the method in the link below for performing a dynamic pivot query. Is there an alternative method to perform a dynamic pivot without storing the query as a string and inserting a column string inside it?
http://www.simple-talk.com/community/blogs/andras/archive/2007/09/14/37265.aspx
Short answer: no.
Long answer:
Well, that's still no. But I will try to explain why. As of today, when you run a query, the DB engine demands to know the result set structure (number of columns, column names, data types, etc.) that the query will return. Therefore, you have to define the structure of the result set when you ask the DB for data. Think about it: have you ever run a query where you would not know the result set structure beforehand?
That also applies even when you do select *, which is just syntactic sugar. In the end, the returned structure is "all columns in such table(s)".
By assembling a string, you dynamically generate the structure that you desire, before asking for the result set. That's why it works.
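A stripped-down illustration (table and column names invented): the column list has to be spliced into the text before execution, because once the engine sees the statement it must already be able to name every output column.

DECLARE @cols nvarchar(max);
DECLARE @sql  nvarchar(max);
SET @cols = N'[2019],[2020],[2021]';
SET @sql  = N'SELECT * FROM Sales PIVOT (SUM(Amount) FOR Yr IN (' + @cols + N')) AS p';
EXEC sp_executesql @sql;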
Finally, you should be aware that assembling the string dynamically could, in theory, produce a result set with infinitely many columns. Of course that's not actually possible and it would fail, but I'm sure you understand the implications.
Update
I found this, which reinforces the reasons why it does not work.
Here:
SSIS relies on knowing the metadata of the dataflow in advance, and a dynamic pivot (which is what you are after) is not compatible with that.
I'll keep looking and adding here.
I'm here to share a consolidated analysis for the following scenario:
I have an 'Item' table and a search SP for it. I want to be able to search for multiple ItemCodes, like:
- Table structure: Item(Id INT, ItemCode nvarchar(20))
- Filter query format: SELECT * FROM Item WHERE ItemCode IN ('xx','yy','zz')
I want to do this dynamically using a stored procedure. I'll pass an @ItemCodes parameter which will have comma(',') separated values, and the search should be performed as above.
Well, I've already visited a lot of posts/forums. First, my constraints:
Dynamic SQL might be the least complex way, but I don't want to consider it because of concerns like performance and security (SQL injection, etc.).
Other approaches like XML, etc. are also out if they make things complex.
And finally, no extra temp-table-JOIN kind of performance-hitting tricks, please.
I have to manage the performance as well as the complexity. Here are two threads I've looked at:
T-SQL stored procedure that accepts multiple Id values
Passing an "in" list via stored procedure
I've reviewed the above two posts and gone through some of the solutions provided; here are some limitations:
http://www.sommarskog.se/arrays-in-sql-2005.html
This will require me to 'declare' the parameter type while passing it to the SP, which distorts the abstraction (I don't set the type on any of my parameters because each of them is treated generically).
http://www.sqlteam.com/article/sql-server-2008-table-valued-parameters
This is a structured approach, but it increases complexity, requires DB-structure-level changes, and is not as abstract as the above.
http://madprops.org/blog/splitting-text-into-words-in-sql-revisited/
Well, this seems to match up with my old solution. Here's what I did in the past:
I created a SQL function, [GetTableFromValues], that returns a temp table populated with each item (one per row) from the comma-separated @ItemCodes.
And here's how I use it in my WHERE clause filter in the SP:
SELECT * FROM Item WHERE ItemCode IN (SELECT * FROM [dbo].[GetTableFromValues](@ItemCodes))
This one is reusable and looks simple and short (comparatively, of course). Have I missed anything, or does any expert have a better solution (obviously 'within' the limitations of the above-mentioned points)?
Thank you.
I think using dynamic T-SQL is pragmatic in this scenario. If you are careful with the design, dynamic SQL works like a charm; I have leveraged it in countless projects when it was the right fit. With that said, let me address your two main concerns - performance and SQL injection.
With regard to performance, read the T-SQL reference on parameterized dynamic SQL and sp_executesql (instead of sp_execute). A combination of parameterized SQL and sp_executesql will get you out of the woods on performance by ensuring that query plans are reused and recompiles are avoided. I have used dynamic SQL even in real-time contexts and it works like a charm with these two items taken care of. For your own satisfaction, you can run a loop of a million or so calls to the SP with and without the two optimizations, and use SQL Profiler to track SP:Recompile events.
Now, about SQL injection. This will be an issue if you use the wrong user widget, such as a textbox, to let the user input the item codes. In that scenario a hacker could write select statements and try to extract information about your system. You can write code to prevent this, but I think going down that route is a trap. Instead, consider an appropriate widget such as a listbox (depending on your frontend platform) that allows multiple selection. The user then just selects from a list of presented items, and your code generates the string containing the corresponding item codes. Basically, you do not pass user text to the dynamic SQL SP! You can even use slicker jQuery-based selection widgets, but the bottom line is that the user never gets to type unacceptable text that hits your data layer.
Next, you just need a simple stored procedure on the database that takes a parameter for the item codes (e.g. '''xyz''','''abc'''). Internally it should use sp_executesql with a parameterized dynamic query.
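A sketch of that pattern (names invented, and fixed here at three item codes; a real version would build one placeholder per code): the dynamic text stays constant for a given list size, so its plan is reused, and no user-supplied text is ever concatenated into the statement.

CREATE PROCEDURE dbo.SearchItems
    @Code1 nvarchar(20), @Code2 nvarchar(20), @Code3 nvarchar(20)
AS
BEGIN
    DECLARE @sql nvarchar(max);
    SET @sql = N'SELECT * FROM Item WHERE ItemCode IN (@c1, @c2, @c3)';
    EXEC sp_executesql @sql,
         N'@c1 nvarchar(20), @c2 nvarchar(20), @c3 nvarchar(20)',
         @c1 = @Code1, @c2 = @Code2, @c3 = @Code3;
END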
I hope this helps.
-Tabrez
This is hopefully just a simple question involving performance optimizations for queries in SQL Server 2008.
I've worked for companies that use stored procs a lot for their ETL processes as well as for some of their websites. I've seen scenarios where they need to retrieve specific records based on a finite set of key values. I've seen it handled in 3 different ways, illustrated via pseudo-code below.
Dynamic SQL that concatenates a string and executes it:
EXEC('SELECT * FROM TableX WHERE xId IN (' + @Parameter + ')')
Using a user-defined function to split a delimited string into a table:
SELECT * FROM TableY INNER JOIN SPLIT(@Parameter) ON yID = splitId
Using XML as the parameter instead of a delimited varchar value:
SELECT * FROM TableZ JOIN @Parameter.nodes(xpath) AS x (y) ON ...
While I know creating the dynamic SQL in the first snippet is a bad idea for a large number of reasons, my curiosity comes from the last two examples. Is it more efficient to do the due diligence in my code to pass such lists via XML as in snippet 3, or is it better to just delimit the values and use a UDF to take care of it?
There is now a 4th option - table-valued parameters, whereby you can actually pass a table of values into a sproc as a parameter and then use it as you would normally use a table variable. I'd prefer this approach over the XML (or CSV parsing) approach.
I can't quote performance figures for all the different approaches, but that's the one I'd be trying - I'd recommend doing some real performance tests on them.
Edit:
A little more on TVPs. In order to pass the values into your sproc, you just define a SqlParameter (SqlDbType.Structured) - the value of this can be set to any IEnumerable, DataTable or DbDataReader source. So presumably you already have the list of values in a list/array of some sort - you don't need to do anything to transform it into XML or CSV.
I think this also makes the sproc clearer, simpler and more maintainable, providing a more natural way to achieve the end result. One of the main points is that SQL performs best at set-based activities, not looping or string manipulation.
That's not to say it will perform great with a large set of values passed in. But with smaller sets (up to ~1000) it should be fine.
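To sketch the idea (type and proc names invented): you define a table type once, and the sproc then takes it as a READONLY parameter and joins against it like any other table.

CREATE TYPE dbo.IdList AS TABLE (Id int PRIMARY KEY);
GO
CREATE PROCEDURE dbo.GetByIds
    @Ids dbo.IdList READONLY
AS
    SELECT t.*
    FROM TableX AS t
    JOIN @Ids AS i ON t.xId = i.Id;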
UDF invocation is a little bit more costly than splitting the XML using the built-in function.
However, this only needs to be done once per query, so the performance difference will be negligible.