Is parameterisation needed in this instance?

I'm designing a UWP app that uses an SQLite database to store its information. From previous research I have learnt that the SQLiteConnection.Update() and SQLiteConnection.Insert() methods are safe to use because the inputs are sanitised before they are written to the database.
The next step I need to do is sync that data with an online database - in this case SQL Server - using a service layer as my go between. Given that the data was previously sanitised by the SQLite database insert, do I still need to parameterise the object values using the service layer before they are passed to my SQL Server database?
The simple assumption says yes: despite being sanitised on the SQLite insert, the values are technically still raw strings that could affect the main database if they are not parameterised when sent there.
Should I just simply employ the idea of "If in doubt, parameterise" ?

I would say that you should always use SQL parameters. There are a few reasons why you should do so:
Security.
Performance. If you use parameters, execution plans are more likely to be reused. For details see this article.
Reliability. It is always easier to make a mistake if you build SQL commands by concatenating strings.
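As a minimal sketch of what this looks like in the service layer (the SyncLayer class and the table and column names are assumptions for illustration; the SqlCommand/SqlParameter usage is standard ADO.NET):

    using System.Data;
    using System.Data.SqlClient;

    public static class SyncLayer
    {
        // Hypothetical upsert for the sync service; the table and column
        // names are made up, the parameterisation pattern is the point.
        public static void UpsertItem(SqlConnection conn, string name, string notes)
        {
            const string sql =
                "UPDATE Items SET Notes = @notes WHERE Name = @name; " +
                "IF @@ROWCOUNT = 0 INSERT INTO Items (Name, Notes) VALUES (@name, @notes);";

            using (var cmd = new SqlCommand(sql, conn))
            {
                // Values travel as typed parameters, never as part of the
                // SQL text, so a malicious string cannot change the
                // statement's structure.
                cmd.Parameters.Add("@name", SqlDbType.NVarChar, 100).Value = name;
                cmd.Parameters.Add("@notes", SqlDbType.NVarChar, -1).Value = notes;
                cmd.ExecuteNonQuery();
            }
        }
    }

Because the command text never changes between calls, SQL Server can also cache and reuse the execution plan, which is the performance point above.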

Related

Is it ever okay to accept client-side SQL? If so, how to validate?

I have an application in which I'd like to accept a user-supplied SQL query from a front-end query builder (http://querybuilder.js.org/). That query eventually needs to make its way to a Postgres database, where it is run to return a subset of data.
The query builder linked above can export SQL or a mongo query. I imagine using the mongo query is relatively safe, since I can add to it simply on the server:
query.owner_of_document = userId
to limit results (to documents owned by the user).
Whereas the SQL statement could potentially be hijacked in an injection attack if someone attempts to store a malicious string of SQL for execution.
Is directly accepting SQL from a client bad practice? How can I ensure the supplied SQL is safe?
Thanks!
Why do you need to accept an entire SQL statement?
Can you accept only parameters and then run a predefined query? (A sketch of this idea follows the quote below.)
There are loads of questions/answers on SO relating to SQL injection and using parameters is a first step in avoiding injection attacks, such as "Are Parameters really enough to prevent Sql injections?"
But I think this answer to a different question sums things up well:
Don't try to do security yourself. Use whatever trusted, industry standard library there is available for what you're trying to do, rather than trying to do it yourself. Whatever assumptions you make about security might be incorrect. As secure as your own approach may look ... there's a risk you're overlooking something, and do you really want to take that chance when it comes to security?
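To sketch the "parameters only" idea, assuming a .NET backend for illustration: the client sends just the filter values, and the server binds them into a fixed statement. The table and column names are placeholders; the parameter API is Npgsql's standard one for Postgres.

    using Npgsql;

    public static class DocumentQueries
    {
        // The client sends only values; the SQL shape is fixed on the
        // server, so there is nothing for an attacker to rewrite.
        public static NpgsqlCommand BuildDocumentQuery(NpgsqlConnection conn,
                                                       int userId, string status)
        {
            var cmd = new NpgsqlCommand(
                "SELECT id, title FROM documents " +
                "WHERE owner_of_document = @owner AND status = @status", conn);

            // The ownership constraint is enforced server-side, mirroring
            // the query.owner_of_document = userId idea from the question.
            cmd.Parameters.AddWithValue("owner", userId);
            cmd.Parameters.AddWithValue("status", status);
            return cmd;
        }
    }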

Is there a way to reference a SQL statement to the C# EF code which generated the SQL?

When I troubleshoot a large .NET app which uses only stored procedures, I capture the SQL, which includes the SP name, from SQL Server Profiler, and then it's easy to do a global search for the SP in the source files and find the exact line which produced it.
When using Entity Framework, this is not possible due to the dynamic creation of SQL statements. However there are times when I capture some problematic sql statements from production and want to know where in the code they were generated from.
I know one can have EF generate logs and tracing on demand. This would probably be taxing for a busy server and produce too many logs. I have read about using MiniProfiler, but I'm not sure it fits my needs as I don't have access to the production server. I do, however, have access to attach SQL Server Profiler to the database server.
My idea is to find a way to have EF attach/inject a unique code into the generated SQL that doesn't affect the SQL's outcome. I can then use it to cross-reference the captured statement to the line of code which produced it. The code is static, meaning a distinct constant is used for every EF LINQ statement, perhaps sent as a dummy statement or as a comment along with the SQL.
I know this will add some extra traffic but in my case, it will add extra flexibility and cut a lot of troubleshooting time.
Any ideas of how to do this or any alternatives?
One very simple approach would be to execute something via ExecuteStoreCommand(): Refresh data from stored procedure. I'm not sure if you can "execute" just a comment, but at the very least you should be able to do something like:
ExecuteStoreCommand("DECLARE #MyTag VARCHAR(100) = 'some_unique_id';");
This is very simple, but you would have to find the association in two steps:
Get the SessionID (i.e. SPID) of the poorly performing query in SQL Server Profiler
Search the Profiler entries for the prior SQL statement for that same SPID
Another option that might be a little more complicated but would remove that additional step when it comes to making that association is to "intercept" the commands before they get executed and inject a comment with your unique id. Please see the following S.O. Answer for details. You shouldn't need the full extent of what they did, but even if you do, it seems like all of the code (or all the relevant stuff) is there:
Adding a query hint when calling Table-Valued Function
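If you're on EF6, one way to do that interception (a sketch, not the code from that answer) is an IDbCommandInterceptor that prefixes each command with a comment tag. The tag value below is a placeholder; mapping distinct tags to individual LINQ statements would need extra plumbing on top of this.

    using System.Data.Common;
    using System.Data.Entity.Infrastructure.Interception;

    // Prefixes every EF command with a comment so it can be spotted in
    // SQL Server Profiler without changing the query's result.
    public class TagCommandInterceptor : IDbCommandInterceptor
    {
        private static void Tag(DbCommand command)
        {
            command.CommandText = "/* app-tag: orders-page */ " + command.CommandText;
        }

        public void ReaderExecuting(DbCommand command,
            DbCommandInterceptionContext<DbDataReader> context) { Tag(command); }
        public void NonQueryExecuting(DbCommand command,
            DbCommandInterceptionContext<int> context) { Tag(command); }
        public void ScalarExecuting(DbCommand command,
            DbCommandInterceptionContext<object> context) { Tag(command); }

        public void ReaderExecuted(DbCommand command,
            DbCommandInterceptionContext<DbDataReader> context) { }
        public void NonQueryExecuted(DbCommand command,
            DbCommandInterceptionContext<int> context) { }
        public void ScalarExecuted(DbCommand command,
            DbCommandInterceptionContext<object> context) { }
    }

    // Registered once at startup:
    // DbInterception.Add(new TagCommandInterceptor());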
By the way, this situation is a point in favor of using Stored Procedures instead of an ORM. And, what do you expect to be able to do in terms of performance tuning once you do find the offending app code? (another point in favor of using Stored Procedures instead of an ORM ;-).

Build SQL Query in client or server side?

I'm deciding between two options:
Build the query on the client side and send it to the server.
Send the needed information from the client so the query can be built on the server side.
On which side should I build the query?
Advantages / Disadvantages?
Thanks.
I tend to prefer building queries on the server side and either storing them as stored procedures on the SQL server or building the query string in a backend language like PHP.
Building the query in something like JavaScript and sending it to the server creates the possibility of someone altering your JavaScript inline and submitting their own query string through something like Firebug. If you build the query string in a backend (server-side) language, the only thing the user can alter is the input variables (if applicable). Because of this, you should always check and cleanse all input variables for SQL injection.
Removing as much access to raw code from the end user as possible seems to always be the best option in terms of application security. Someone else may weigh in about performance limitations; but if a user alters and submits their own query string through a JavaScript console and drops your entire table, performance won't really be a factor anymore, will it?
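A sketch of that "cleanse the input variables" advice, with hypothetical names: values go into parameters, while identifiers such as a client-chosen sort column (which cannot be parameterised) are checked against a whitelist.

    using System;
    using System.Collections.Generic;
    using System.Data.SqlClient;

    public static class SearchQueries
    {
        private static readonly HashSet<string> AllowedSorts =
            new HashSet<string> { "Name", "CreatedAt", "Price" };

        public static SqlCommand BuildSearch(SqlConnection conn,
                                             string term, string sortColumn)
        {
            if (!AllowedSorts.Contains(sortColumn))
                throw new ArgumentException("Unknown sort column", nameof(sortColumn));

            // sortColumn is safe to concatenate only because it was just
            // validated against the fixed whitelist above; the search term
            // itself travels as a parameter.
            var cmd = new SqlCommand(
                "SELECT Id, Name FROM Products WHERE Name LIKE @term " +
                "ORDER BY " + sortColumn, conn);
            cmd.Parameters.AddWithValue("@term", "%" + term + "%");
            return cmd;
        }
    }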

Is the SAP SAVE_TEXT function module secure against SQL injection?

I have a question regarding the ABAP function module SAVE_TEXT. I assume that it is possible to create a custom TDOBJECT and TDID; the long texts are then stored in the tables STXH and STXL. How secure is SAVE_TEXT against SQL injection attacks? Is it invulnerable because the texts are encoded in RAW format?
Your first assumption was either lost in translation or wrong in the first place - the valid values of TDOBJECT and TDID are maintained manually using the transaction SE75, usually by the application developer. They are not created as part of the everyday application processing.
As far as the database access is concerned, there are two security levels to protect against SQL injection, although one was not designed to be a security level:
The contents of the text are stored in an internal form that is serialized as a byte string. Whatever SQL commands might have been present in the original text do not make it through this conversion.
The DML commands are passed through the usual database interface layer that uses prepared statements with a fixed set of variables that are supplied with values only when executing the statements. As far as I can see, no dynamic SQL statements are used to modify STX* texts.
For normal business applications, this should be safe enough. If you want to run a nuclear power plant, well - we would have to talk.

Streaming in and out of an SQL Server 2005 image field with C#?

After having checked extensively for an answer to this question, I still cannot find a satisfying solution. So here goes.
I need to store a possibly large amount of data in a column of an SQL Server 2005 table. I absolutely need to work in a streaming fashion, so that:
When writing, the data is sent in chunks to the database. Is there some built-in way to achieve this from C# client code using the classes in System.Data.SqlClient? Or do I need to resort to using the ADODB.Stream object? I'm not sure how to mix the two concepts (especially with regard to participating in the current SqlTransaction).
Alternatively, is there a way for a T-SQL stored procedure to append data to an existing column. With this approach, the stored procedure would be called multiple times from the client code and this would achieve the streaming requirement.
When reading, the data should be read one chunk at a time, in a streaming fashion.
Alternatively, is there a way for a T-SQL stored procedure to provide sequential or even random access to the contents of an image field?
Actually, there is a way, it just hurts a bit.
SQL Server 2005 supports updating part of a column:
http://technet.microsoft.com/en-us/library/ms177523(SQL.90).aspx
And you can do a read of part of a column with substring (yes, even binary - it'll cheerfully return an array of bytes).
The caveats here are that it's only an update - so you'd have to create a value in the varbinary(max) field first, then update it. Other than that, it's absolutely possible to treat it as though you're streaming data to/from SQL Server. I wrapped the retrieval/update functionality in a stream class to make my life easier.
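As a rough sketch of that approach, assuming a made-up Blobs table whose Data column has already been converted to varbinary(max) as described (the .WRITE clause does not work on image columns):

    using System.Data;
    using System.Data.SqlClient;

    public static class BlobChunks
    {
        // Append one chunk to a varbinary(max) column. Passing NULL as
        // the offset makes .WRITE append at the end of the current value.
        public static void AppendChunk(SqlConnection conn, SqlTransaction tx,
                                       int id, byte[] chunk)
        {
            using (var cmd = new SqlCommand(
                "UPDATE Blobs SET Data.WRITE(@chunk, NULL, 0) WHERE Id = @id",
                conn, tx))
            {
                cmd.Parameters.Add("@chunk", SqlDbType.VarBinary, -1).Value = chunk;
                cmd.Parameters.Add("@id", SqlDbType.Int).Value = id;
                cmd.ExecuteNonQuery();
            }
        }

        // Read one chunk back; SUBSTRING on binary data is 1-based.
        public static byte[] ReadChunk(SqlConnection conn, int id,
                                       long offset, int length)
        {
            using (var cmd = new SqlCommand(
                "SELECT SUBSTRING(Data, @offset, @length) FROM Blobs WHERE Id = @id",
                conn))
            {
                cmd.Parameters.Add("@offset", SqlDbType.BigInt).Value = offset + 1;
                cmd.Parameters.Add("@length", SqlDbType.Int).Value = length;
                cmd.Parameters.Add("@id", SqlDbType.Int).Value = id;
                return (byte[])cmd.ExecuteScalar();
            }
        }
    }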
Hope this helps.
Well, answering my own question.
The truth is... there is actually no way to do what I want, either from pure client code or from server-side stored procedures in T-SQL. Until we switch to SQL Server 2008, we will have to find another solution.
However, there is actually a way to simulate this behavior, so that the streaming requirement is achieved. The solution lies in a collaboration between client and server code.
The server database could, for instance, expose the entire contents to be streamed as a set of records in a fragments table, each record representing one chunk of the whole. On the client, the stream is read in sequence and each chunk is sent to the database to fill one record.
With suitable bookkeeping, the reading of the stored data can also be done in a streaming fashion.
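A minimal sketch of that fragments idea, with a hypothetical schema and names:

    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.IO;

    public static class FragmentUploader
    {
        // Hypothetical fragments table:
        //   CREATE TABLE Fragments (DocId INT, Seq INT, Chunk IMAGE,
        //                           PRIMARY KEY (DocId, Seq))
        public static void Upload(SqlConnection conn, int docId, Stream source)
        {
            var buffer = new byte[64 * 1024];   // chunk size is arbitrary
            int read, seq = 0;
            while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
            {
                var chunk = new byte[read];
                Array.Copy(buffer, chunk, read);
                using (var cmd = new SqlCommand(
                    "INSERT INTO Fragments (DocId, Seq, Chunk) VALUES (@d, @s, @c)",
                    conn))
                {
                    cmd.Parameters.Add("@d", SqlDbType.Int).Value = docId;
                    cmd.Parameters.Add("@s", SqlDbType.Int).Value = seq++;
                    cmd.Parameters.Add("@c", SqlDbType.Image).Value = chunk;
                    cmd.ExecuteNonQuery();
                }
            }
        }
    }

Reading back is the mirror image: SELECT Chunk FROM Fragments WHERE DocId = @d ORDER BY Seq, writing each row's bytes to the output stream as it arrives.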
Incidentally, that's what Microsoft BizTalk Server does, and that's how I found out.