I'm trying to narrow down the results returned from a server-generated SSRS report, but the customer is requesting too many fields to be able to do it easily with parameters passed into a predefined SQL statement.
Is it possible to pass a statement into the reporting server from .NET that the server will execute as its datasource, instead of the preconfigured one? Either the complete statement or the WHERE clause would be fine.
If not, is it possible to eval a parameter sent into a stored procedure? I'm aware of the security implications.
Architecturally speaking, if the customer is requesting reports with an infeasible number of parameters, they might want to consider creating an Analysis Services model instead and using Excel or another tool to slice and dice the data to their hearts' content.
I can't speak to the .NET option. You can definitely use a stored procedure in a report dataset, but I'm not sure how that would help you, as it would still require parameters to be passed to it.
We decided to route a parameter into a stored procedure, which executes a SQL query using that parameter. The other report parameters use the Prompt as a friendly name and the Name as the column name; the program constructs a WHERE clause from this information and passes it into the report's query parameter. It's not a perfect solution, but we've closed every injection hole we could find, and it works. Sometimes you've got to be happy with that.
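For anyone doing something similar, the stored procedure end can look roughly like this. It is only a minimal sketch with made-up procedure, table, and parameter names, and it assumes the .NET side has already whitelisted the column names before assembling the predicate:

-- Hypothetical sketch: the report passes the assembled predicate in @WhereClause.
-- Column names must be validated/whitelisted on the .NET side before this is called.
CREATE PROCEDURE dbo.usp_GetReportData
    @WhereClause NVARCHAR(MAX)
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @Sql NVARCHAR(MAX) =
        N'SELECT * FROM dbo.ReportSource ' +
        CASE WHEN @WhereClause > N'' THEN N'WHERE ' + @WhereClause ELSE N'' END;

    EXEC sys.sp_executesql @Sql;
END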
I'm designing a UWP app that uses an SQLite database to store its information. From previous research I have learnt that the SQLite functions SQLiteConnection.Update() and SQLiteConnection.Insert() are safe to use, as the inputs are sanitised before being entered into the database.
The next step I need to do is sync that data with an online database - in this case SQL Server - using a service layer as my go-between. Given that the data was previously sanitised by the SQLite insert, do I still need to parameterise the object values in the service layer before they are passed to my SQL Server database?
The simple assumption says yes, because despite being sanitised on the SQLite side, they are technically still raw strings that could affect the main database if not parameterised when sent there.
Should I simply employ the idea of "if in doubt, parameterise"?
I would say that you should always use SQL parameters. There are a few reasons why you should do so:
Security.
Performance. If you use parameters, execution plans are more likely to be reused. For details see this article.
Reliability. It is much easier to make a mistake when you build SQL commands by concatenating strings.
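To illustrate the first two points, here is what the parameterised form looks like on the SQL Server side (a minimal sketch; the table and column names are made up):

-- Parameterised: the statement text is identical on every call, so the cached
-- plan can be reused and the value is never interpreted as SQL.
EXEC sys.sp_executesql
    N'SELECT CustomerId, Name FROM dbo.Customer WHERE Name = @Name',
    N'@Name NVARCHAR(100)',
    @Name = N'O''Brien';

-- Concatenated: every distinct value produces a different statement text
-- (a new plan), and the value becomes part of the SQL itself:
-- SELECT CustomerId, Name FROM dbo.Customer WHERE Name = 'O''Brien'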
When I troubleshoot a large .NET app that uses only stored procedures, I capture the SQL (which includes the SP name) from SQL Server Profiler, and then it's easy to do a global search for the SP in the source files and find the exact line that produced the SQL.
When using Entity Framework this is not possible, due to the dynamic creation of SQL statements. However, there are times when I capture problematic SQL statements from production and want to know where in the code they were generated.
I know one can have EF generate logs and tracing on demand. This would probably be taxing for a busy server and would produce too many logs. I have read a bit about using MiniProfiler, but I'm not sure it fits my needs, as I don't have access to the production server. I do, however, have access to attach SQL Server Profiler to the database server.
My idea is to find a way to have EF attach/inject a unique code into the generated SQL that doesn't affect the outcome of the SQL. I can then use it to cross-reference back to the line of code that injected it. The unique code is static, meaning a distinct static code is used for every EF LINQ statement - maybe sent as a dummy SQL statement or as a comment along with the real statement.
I know this will add some extra traffic but in my case, it will add extra flexibility and cut a lot of troubleshooting time.
Any ideas of how to do this or any alternatives?
One very simple approach would be to execute something via ExecuteStoreCommand(): Refresh data from stored procedure. I'm not sure if you can "execute" just a comment, but at the very least you should be able to do something like:
ExecuteStoreCommand("DECLARE #MyTag VARCHAR(100) = 'some_unique_id';");
This is very simple, but you would have to find the association in two steps:
Get the SessionID (i.e. SPID) of the poorly performing query in SQL Server Profiler
Search the Profiler entries for the prior SQL statement for that same SPID
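If you save the trace to a file, that same correlation can be done with a query rather than by scrolling through Profiler manually. A rough sketch, where the trace file path and the tag value are placeholders:

-- Load a saved trace file and list everything the tagged sessions ran, in order.
-- 'C:\Traces\MyTrace.trc' and 'some_unique_id' are placeholders.
SELECT  t.SPID, t.StartTime, t.TextData
FROM    sys.fn_trace_gettable('C:\Traces\MyTrace.trc', DEFAULT) AS t
WHERE   t.SPID IN (SELECT SPID
                   FROM   sys.fn_trace_gettable('C:\Traces\MyTrace.trc', DEFAULT)
                   WHERE  TextData LIKE '%some_unique_id%')
ORDER BY t.SPID, t.StartTime;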
Another option that might be a little more complicated but would remove that additional step when it comes to making that association is to "intercept" the commands before they get executed and inject a comment with your unique id. Please see the following S.O. Answer for details. You shouldn't need the full extent of what they did, but even if you do, it seems like all of the code (or all the relevant stuff) is there:
Adding a query hint when calling Table-Valued Function
By the way, this situation is a point in favor of using Stored Procedures instead of an ORM. And, what do you expect to be able to do in terms of performance tuning once you do find the offending app code? (another point in favor of using Stored Procedures instead of an ORM ;-).
I have a question regarding the ABAP function module SAVE_TEXT. I assume that it is possible to create a custom TDOBJECT and TDID; the long texts are then stored in the tables STXH and STXL. How secure is SAVE_TEXT against SQL injection attacks? Is it not vulnerable because the texts are encoded in RAW format?
Your first assumption was either lost in translation or wrong in the first place - the valid values of TDOBJECT and TDID are maintained manually using the transaction SE75, usually by the application developer. They are not created as part of the everyday application processing.
As far as the database access is concerned, there are two security levels to protect against SQL injection, although one was not designed to be a security level:
The contents of the text are stored in an internal form that is serialized as a byte string. Whatever SQL commands might have been present in the original text do not make it through this conversion.
The DML commands are passed through the usual database interface layer that uses prepared statements with a fixed set of variables that are supplied with values only when executing the statements. As far as I can see, no dynamic SQL statements are used to modify STX* texts.
For normal business applications, this should be safe enough. If you want to run a nuclear power plant, well - we would have to talk.
It seems that one could stop all threat of SQL injection once and for all by simply rejecting all queries that don't use named parameters. Is there any way to configure SQL Server to do that? Or else any way to enforce it at the application level, by inspecting each query, without writing an entire SQL parser? Thanks.
Remove the grants for a role to be able to SELECT/UPDATE/INSERT/DELETE against the table(s) involved
Grant EXECUTE on the role for stored procedures/functions/etc
Associate the role to database user(s) you want to secure
It won't stop an account that also has the ability to GRANT access, but it will stop the users associated with the role (assuming no other per-user grants) from being able to execute queries outside of the stored procedures/functions/etc. that exist.
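In T-SQL that setup looks roughly like the following sketch; the role, user, and object names here are all made up:

-- Hypothetical names throughout.
CREATE ROLE AppExecOnly;

-- Make sure the role has no direct table access (or explicitly deny it).
DENY SELECT, INSERT, UPDATE, DELETE ON dbo.Orders TO AppExecOnly;

-- Grant execute only on the procedures the application is meant to call.
GRANT EXECUTE ON dbo.usp_GetOrders TO AppExecOnly;
GRANT EXECUTE ON dbo.usp_InsertOrder TO AppExecOnly;

-- Put the application's database user into the role
-- (on older versions use EXEC sp_addrolemember instead).
ALTER ROLE AppExecOnly ADD MEMBER AppUser;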
There are only a couple of ways to do this. OMG Ponies has the best answer: don't allow direct SQL statements against your database, and instead leverage the tools and security SQL Server can provide.
An alternative would be to add an additional tier which all queries would have to go through. In short, you'd pass all queries (SOA architecture) to a new app, which would evaluate each query before passing it on to SQL Server. I've seen exactly one company do this, in reaction to SQL injection issues their site had.
Of course, this is a horrible way of doing things because SQL injection is only one potential problem.
Beyond SQL injection, you also have the issue of what happens when the site itself is cracked. Once you can write a new page to a web server, it becomes trivial to pass any query you want to the associated database server. This would easily bypass any code-level check you could put in place, and it would allow the attacker to just write select * from ... or truncate table ... Heck, an internal person could potentially just connect directly to the SQL Server using the site's credentials and run any query they wanted.
The point is, if you leverage the security built into SQL Server to prevent direct table access, then you can control, through stored procedures, the full range of actions available to anyone attempting to connect to the server.
And how do you want to check for that? Queries sometimes contain constant values that are simply added to the query text rather than passed as parameters. For instance, I have a database that is prepared to be multilingual, but not all the code is, so my query looks like this:
SELECT NAME FROM SOMETABLE WHERE ID = :ID AND LANGUAGEID = 1
The ID is a parameter, but the language ID isn't. Should this query be blocked?
You ask to block queries that don't use named parameters. That can easily be enforced: just block any query that doesn't specify any parameters, which you can do in your application layer. But it will be hard to block queries like the one above, where one value is a parameter and the other isn't. You'd need to parse the query to detect that, and doing so will be hard too.
I don't think SQL Server has any built-in features to do this.
I'm using SQL Report Builder and I would like to call the stored procedures that were already built. SSRS allows me to do this, but since the company's requirement is to build a report model so that users can do their own ad-hoc reports, I need to find a way to include the stored procs that were written to populate some of the tables.
Can anyone please help me with this?
Thanks.
When you use SSRS to create a Report Model project, you create a data source and then a Data Source View (DSV), which is a restricted list of the tables & views available to the report model.
The idea is to create a limited set of views for the Report Model so that report creation is simple and unambiguous for end users. It's not recommended if end users are not going to be the ones creating reports. In practice, Report Builder is not powerful enough for power users, and other users are not going to get a lot of value from it that they couldn't already get from connecting Excel to the datasource and creating pivot tables.
Unfortunately with Report Builder you are limited to tables & views.
This immediately means that complex reports should not use Report Builder. Anything fancy is not going to be easy to reproduce in a view, because views don't take parameters.
If it is definitely required, then you need to somehow create views from those existing stored procs.
One way is to convert them to table-valued functions (TVFs). That is not an easy process, because you then still need to incorporate the TVF into a view, and the view still won't be able to take parameters.
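As a rough illustration of what that conversion ends up looking like (hypothetical object names; the hard-coded value in the view is exactly the part that can no longer be a parameter):

-- The old stored procedure logic moved into an inline TVF.
CREATE FUNCTION dbo.fn_ManagerEmployees (@ManagerId INT)
RETURNS TABLE
AS
RETURN
(
    SELECT e.EmployeeId, e.Name
    FROM   dbo.Employee AS e
    WHERE  e.ManagerId = @ManagerId
);
GO

-- The view the Report Model can actually see; the parameter has to be fixed here.
CREATE VIEW dbo.vw_ManagerEmployees_16
AS
SELECT EmployeeId, Name
FROM   dbo.fn_ManagerEmployees(16);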
Teo Lachev's book 'Applied Microsoft SQL Server 2008 Reporting Services' lists another workaround on page 312. You can use OPENROWSET to create a named query. This relies on you having already enabled SQL Server for ad hoc distributed queries (server option).
The example he gives:
SELECT a.* FROM OPENROWSET('SQLNCLI', 'Trusted_Connection=yes',
'[AdventureWorks].[dbo].uspGetManagerEmployees 16') AS a
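If you go down that route, you would first enable the server option and could then wrap the query either as a named query in the DSV or in a database view. A sketch based on the example above, with a made-up view name:

-- Enable ad hoc distributed queries (server-level option).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Ad Hoc Distributed Queries', 1;
RECONFIGURE;
GO

-- Wrap the OPENROWSET call so it can appear as a view in the Data Source View.
CREATE VIEW dbo.vw_ManagerEmployees
AS
SELECT a.* FROM OPENROWSET('SQLNCLI', 'Trusted_Connection=yes',
    '[AdventureWorks].[dbo].uspGetManagerEmployees 16') AS a;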
That actually seems like the least amount of work for you.
The best option is to just do the reports in SSRS and reference the stored proc as is.