Testing MS ODBC SQLReturn values

Can these values (returned from calls like SQLGetConnectAttr) be tested for success/failure using the standard WinAPI FAILED/SUCCEEDED macros, to avoid checking individual values? If not, are there special SQL variants of those macros?

I don't think such a macro exists. SQLxxx() functions return various values, and whether a returned value indicates success or failure depends on the application. One example of such a value is SQL_NO_DATA_FOUND. Is it success or failure? In my opinion it depends on the application and context.

Related

Access dynamic query - Better to build one conditional SQL query or multiple queries with VBA?

I have a Microsoft Access 2010 form with dropdown boxes and a checkbox which represent certain parameters. I need to run a query with conditions based on these parameters. It should also be possible to select no criteria in the dropdown boxes and checkbox in order to pull all data.
I have two working ways of implementing this:
I build a query with IIf statements in the WHERE clause, nesting statements until I have accounted for every combination of criteria. I reference the criteria in the SQL either directly, e.g. Forms!frmMyFrm!checkbox1, or through a function FormFieldValue(formName, fieldName) which returns the value of a control given the form and control names (this is because of previous issues). I set this query to run when the form's button is pressed.
I set a VBA sub to run when the button is pressed. It checks the conditions and sets the query's SQL to a predetermined SQL string based on the control criteria (referenced in the same way as in the previous method). This also involves many If...Else statements, but it is a little easier to read than a giant query.
What is the preferred method? Which is more efficient?
I don't believe you would find one way noticeably more efficient than the other. For the most part it is simply personal preference.
I generally use VBA, check the value of each dropdown/checkbox, build pieces of the SQL query, and then put them together at the end. The issue you may run into with this method, though, is that with a large number of dropdowns and checkboxes the code is easy to get "lost" in.
If running time is really key, though, you could always use some of the tips from "How do you test running time of VBA code?" to see which way is faster.
After a lot of experimentation, and a bit of new information indicating that a pre-built (saved) query is faster than SQL assembled in VBA, the most efficient and clearest solution in the context of Microsoft Access is to build and save a number of dependent queries beforehand.
Essentially, build a chain of queries, each with an IIf dependent on a different criterion. Then you only need to run the final query. The only case where you would have to incorporate a VBA If...Else is if you need to query something more complicated than SELECT...WHERE(IIf(...)).
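For a rough idea of what one of those saved queries can look like, here is a sketch of an IIf making a single criterion optional; the table and combo box names are invented (only frmMyFrm comes from the question). When the combo box is empty, the condition collapses to CustomerID = CustomerID, i.e. no filtering:
SELECT OrderID, CustomerID, OrderDate
FROM tblOrders
WHERE CustomerID = IIf([Forms]![frmMyFrm]![cboCustomer] Is Null, CustomerID, [Forms]![frmMyFrm]![cboCustomer]);
Each additional criterion then gets its own query layered on top of this one, so no single query has to hold every IIf.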
This has a few advantages:
The SQL is already compiled in the saved query, speeding things up.
No more getting lost in code:
There is no giant, nearly-impossible-to-edit query with way too many IIfs.
The minimal VBA code is even easier to follow.
At least for me, who's not an expert in SQL, it's convenient that I can often use the MS Access visual query builder for each part.

Should I use a function or a parameter in my report?

SSRS gives you the ability to use parameters.
Alternatively, you can actually write your own function within the RDL file.
I am wondering in what situation one would use the capabilities of a function rather than a parameter, since you can implement logic in both?
For example, MSDN has chosen to code this:
Public Function ChangeWord(ByVal s As String) As String
    Dim strBuilder As New System.Text.StringBuilder(s)
    If s.Contains("Bike") Then
        strBuilder.Replace("Bike", "Bicycle")
        Return strBuilder.ToString()
    Else : Return s
    End If
End Function
I can just as well create an IIF statement within a parameter and do the same.
I have been working with SSRS for years and have never used a (VBA) function. I think it is better to use parameters. My suggestion is based on the following reasons:
SSRS is a tool designed to present data. Data manipulation is best handled on SQL Server.
Using parameters also allows you to do data manipulation closer to the data source, and to bring back only the data the report actually needs. Bringing data into SSRS and then filtering it out using functions obviously involves unnecessary data processing.
Code maintenance is easier when you have all the code in one place (stored procedures in SQL Server, functions in SSRS reports).
Why redo work that has already been taken care of for you? The example function you have shown can easily be replaced by SQL Server's built-in REPLACE function (again, SQL Server will handle this much better and faster than SSRS); see the sketch after this list.
And the list goes on... As they say, keep it simple: make full use of the built-in functionality of SQL Server and SSRS and avoid writing unnecessary code.
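As a rough illustration of both points, a hypothetical dataset query along these lines keeps the filtering and the string manipulation on the SQL Server side; the table, column, and parameter names are invented:
-- @Category is a report parameter; the Bike -> Bicycle rewrite is done with
-- REPLACE instead of a report-level function.
SELECT ProductID,
       REPLACE(ProductName, 'Bike', 'Bicycle') AS ProductName
FROM dbo.Products
WHERE Category = @Category;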

Performance comparison for SWITCH vs IIF?

Background: migrating legacy Excel reporting projects into MS Access, driven by the need for a proper database structure.
Limitation: the development environment is Excel and Access. Because large chunks of data are processed and a moving n-month window of data is kept, it can't afford to loop through recordsets.
Issue: In the current Excel reporting platform, a number of sub-tools are used separately to process data. As a result, each has the luxury of processing small chunks of data by going through each row. Conditional checks are performed using If-Else.
In the proposed MS Access structure, these If-Else checks are converted into IIf. Given this situation, I would like opinions on the performance of IIf vs Switch. Any better solutions are most welcome.
PS: after importing the source data, the database is automatically closed for a compact and repair because it "bloats". Eventually the database should be compatible with both the 2003 and 2010 Office packages.
I think maybe this is what you want:
Microsoft
The Iif function returns one of two values depending on whether the expression is true or not. The following expression uses the Iif function to return a Boolean value of True if the value of LineTotal exceeds 100. Otherwise it returns False.
The Switch function is useful when you have three or more conditions to test. The Switch function returns the value associated with the first expression in a series that evaluates to true.
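As a rough sketch of the difference, here is a three-way test written both ways (the field name is invented):
With nested IIf:
IIf([Score] >= 90, "A", IIf([Score] >= 75, "B", "C"))
With Switch:
Switch([Score] >= 90, "A", [Score] >= 75, "B", True, "C")
Switch takes condition/value pairs and returns the value for the first condition that is true, so a final True acts as the catch-all.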
I think that Switch is a VBA function (like Nz) while Iif is an SQL one. That has a lot of implications that you can explore here, on Allen Browne's site.
Those VBA functions should especially be avoided in queries when working in a client-server architecture (SQL Server/Oracle or similar backend).

Gnome's libgda and SQL injections

I'm using Gnome Data Access (libgda) to access a database in a C program.
I use the GdaSqlBuilder to build my queries.
Here is example code for adding an equality condition on a field for a query:
GdaSqlBuilderId add_equal_condition(char* m_name, GValue* m_value)
{
    GdaSqlBuilderId name, value, condition;
    name = gda_sql_builder_add_id(builder, m_name);
    value = gda_sql_builder_add_expr_value(builder, NULL, m_value);
    condition = gda_sql_builder_add_cond(builder, GDA_SQL_OPERATOR_TYPE_EQUAL, name, value, 0);
    return condition;
}
Does libgda protect itself against SQL injection, or do I need to sanitize the input myself before I pass it to GDA?
Thanks in advance for your answers.
This is explained in the foreword:
When creating an SQL string which contains values (literals), one can be tempted (as it is the easiest solution) to create a string containing the values themselves, execute that statement and apply the same process the next time the same statement needs to be executed with different values. This approach has two major flaws outlined below, which is why Libgda recommends using variables in statements (also known as parameters or place holders) and reusing the same GdaStatement object when only the variables' values change.
https://developer.gnome.org/libgda/unstable/ch06s03.html
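For reference, the placeholder style the foreword recommends looks roughly like this when a statement is written as SQL text; the table and parameter names are invented, and the ##name::type notation is how I recall libgda's SQL parser marking parameters:
SELECT id, name FROM customers WHERE name = ##thename::string
The value for thename is then supplied through the statement's parameter set at execution time instead of being spliced into the SQL string, which is what removes the injection risk.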
Even if the current version is not vulnerable, that does not mean that every future version will not be vulnerable. You should always, without any exception, take care of what a user provides.
The same goes for interfaces from other systems of any kind. This is not limited to SQL injection, and it is not a question of SQL injection or the libraries you use. You are responsible for ensuring that a user can only enter the kind of data that you want, or for rejecting it otherwise. You cannot rely on other code to do that for you.
Generally: nothing can protect itself completely against a certain type of attack. It will always be limited to the attack vectors known at the time of writing.

Possible to spy/mock Sql Server User Defined Functions?

Is it possible to mock/spy functions with T-SQL? I couldn't find anything mentioning it. I was thinking of creating my own implementation using the SpyProcedure as a guideline (if no implementation exists). Anyone had any success with this?
Thanks.
In SQL Server, functions cannot have side effects. That means that in your test you can replace the inner function with one that returns a fixed result, but there is no way to record the parameters that were passed into the function.
There is one exception: If the function returns a string and the string does not have to follow a specific format, you could concatenate the passed-in parameters and then assert later on that the value coming back out contained all the correct values, but that is a very special case and not generally possible.
To fake a function, just drop or rename the original and create your own within the test. I would put this code into a helper function, as it probably will be called from more than one test.
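A minimal sketch of that idea in plain T-SQL; the object, column, and parameter names are invented, and the fake echoes its inputs so a later assert can check what was passed in:
-- Drop the real scalar function and stand in a fake for the duration of the test.
IF OBJECT_ID('dbo.FormatCustomerName', 'FN') IS NOT NULL
    DROP FUNCTION dbo.FormatCustomerName;
GO
CREATE FUNCTION dbo.FormatCustomerName (@FirstName NVARCHAR(50), @LastName NVARCHAR(50))
RETURNS NVARCHAR(200)
AS
BEGIN
    -- Concatenate the parameters so the test can assert on the combined value later.
    RETURN N'FAKE:' + @FirstName + N'|' + @LastName;
END;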