Background: we are migrating long-standing Excel reporting projects into MS Access because we genuinely need a database structure.
Limitation: the development environment is restricted to Excel and Access. Because of the large volumes of data processed and the need to keep a rolling n-month window of data, we cannot afford to loop through recordsets row by row.
Issue: in the current Excel reporting platform, a number of separate sub-tools are used to process the data. Each one therefore only has to handle a small chunk of data, going through it row by row, with conditional checks performed using IF-ELSE.
In the proposed MS Access structure, these IF-ELSE blocks are converted into IIf expressions. Given this situation, I would like opinions on the performance of IIf versus Switch. Any better solutions are most welcome.
PS: after importing the source data, the database is automatically closed so it can be compacted and repaired, as it "bloats". The final database must be compatible with both the 2003 and 2010 Office packages.
I think this may be what you want. From Microsoft's documentation:
The IIf function returns one of two values depending on whether the expression is true or not. The following expression uses the IIf function to return a Boolean value of True if the value of LineTotal exceeds 100. Otherwise it returns False.

The Switch function is useful when you have three or more conditions to test. The Switch function returns the value associated with the first expression in a series that evaluates to true.
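As a minimal Access SQL sketch of the two expressions described above (the LineTotal and Status fields and the Orders table are made-up names for illustration):

SELECT IIf([LineTotal] > 100, True, False) AS OverLimit,
       Switch([Status] = 'I', 'Intermediate',
              [Status] = 'P', 'Pending',
              True, 'Basic') AS StatusName
FROM Orders;

Switch takes expression/value pairs; a final pair with True as its expression acts as the catch-all, much like the ELSE branch of an IF-ELSE block.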
I think that Switch is a VBA function (like Nz) while IIf is an SQL one. That has a lot of implications that you can explore on Allen Browne's site.
Those VBA functions should especially be avoided in queries when working in a client-server architecture (SQL Server/Oracle or similar backend), since they cannot be passed through to the server and force the work back onto the client.
Related
I have a Microsoft Access 2010 form with drop-down boxes and a checkbox which represent certain parameters. I need to run a query with conditions based on these parameters. It must also be possible to select no criteria in the drop-down boxes and checkbox in order to pull all the data.
I have two working ways of implementing this:
I build a query with IIf statements in the WHERE clause, nesting statements until I have accounted for every combination of criteria. I reference the criteria in the SQL either directly, e.g. Forms!frmMyFrm!checkbox1, or via a function FormFieldValue(formName, fieldName) which returns the value of a control given the form and control names (this is because of previous issues). I set this query to run when the form's button is pressed.
I set a VBA sub to run when the button is pressed. I check the conditions and set the query's SQL to a predetermined SQL string based on the control criteria (referenced in the same way as in the previous method). This also involves many If...Else statements, but it is a little easier to read than a giant query.
What is the preferred method? Which is more efficient?
I don't believe you would find one way is more efficient than the other, at least not noticeably. For the most part it is simply personal preference.
I generally use VBA to check the value of each drop-down/checkbox and build pieces of the SQL query, then put them together at the end. The issue that you may run into with this method, though, is that if you have a large number of drop-downs and checkboxes the code is easy to get "lost" in.
If running time is really key, though, you could always use some of the tips from "How do you test running time of VBA code?" to see which way is faster.
After a lot of experimentation, and some new information indicating that a pre-built (saved) query is faster than SQL assembled in VBA, the most efficient and clear solution in the context of Microsoft Access is to build and save a number of dependent queries beforehand.
Essentially, build a chain of queries, each with an IIf dependent on a different criterion. Then you only need to run the final query. The only case where you would have to fall back to a VBA If...Else is if you need to query something more complicated than SELECT...WHERE(IIf(...)).
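As a rough sketch of such a chain (the table, form, and control names below are assumptions, not taken from the question), each saved query applies one criterion and the next query selects from it. For example, qryStep1 applies the checkbox criterion:

SELECT *
FROM tblOrders
WHERE IIf([Forms]![frmMyFrm]![checkbox1], [Status] = 'Open', True);

and qryStep2, the one you actually run, builds on it and applies a drop-down criterion:

SELECT *
FROM qryStep1
WHERE IIf(IsNull([Forms]![frmMyFrm]![cboRegion]), True,
          [Region] = [Forms]![frmMyFrm]![cboRegion]);

Each IIf collapses to True when its control is unset, so leaving every control blank still pulls all the data.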
This has a few advantages:
The SQL is already compiled in the saved query, speeding things up.
No more getting lost in code:
There is no giant, nearly-impossible-to-edit query with way too many IIfs.
The minimal VBA code is even easier to follow.
At least for me, who's not an expert in SQL, it's convenient that I can often use the MS Access visual query builder for each part.
SSRS gives you the ability to use parameters:
Alternatively you can actually write your own function within the RDL file:
I am wondering in what situation one would use the capabilities of a function rather than a parameter, since you can implement logic in both?
For example, MSDN has chosen to code this:
Public Function ChangeWord(ByVal s As String) As String
    Dim strBuilder As New System.Text.StringBuilder(s)
    If s.Contains("Bike") Then
        strBuilder.Replace("Bike", "Bicycle")
        Return strBuilder.ToString()
    Else
        Return s
    End If
End Function
I can just as well create an IIF statement within a parameter and do the same.
I have been working with SSRS for years and have never used a (VBA-style) function. I think it is better to use parameters. My suggestion is based on the following reasons:
SSRS is a tool designed to present data. Data manipulation is best handled on SQL Server.
Using parameters also allows you to do data manipulation closer to the data source and only bring back the data that is actually needed by the report. Bringing data into SSRS and then filtering it out using functions obviously involves unnecessary data processing.
Code maintenance is easier when all the code is kept in one place (stored procedures in SQL Server, functions in SSRS reports).
Why redo work that has already been done for you? The example function you have shown can easily be replaced with SQL Server's built-in REPLACE function (again, SQL Server will handle this much better and faster than SSRS); see the sketch after this list.
And the list goes on... as they say, keep it simple: try to make full use of the built-in functionality of SQL Server and SSRS and avoid writing unnecessary code.
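For instance, a hedged T-SQL equivalent of the MSDN ChangeWord example, done in the dataset query instead of in report code (the ProductName column and Products table are illustrative names only):

SELECT REPLACE(ProductName, 'Bike', 'Bicycle') AS ProductName
FROM Products;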
Yesterday we had a scenario where we needed to get the type of a DB field and, based on that, write out the description of the field, like this:
Select ( Case DB_Type When 'I' Then 'Intermediate'
When 'P' Then 'Pending'
Else 'Basic'
End)
From DB_table
I suggested writing a DB function instead of this CASE statement because that would be more reusable, like this:
Select dbo.GetTypeName(DB_Type)
from DB_table
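(The body of that function is not shown in the question; a sketch of what dbo.GetTypeName might look like, assuming it simply wraps the same CASE expression, would be:)

CREATE FUNCTION dbo.GetTypeName (@DB_Type CHAR(1))
RETURNS VARCHAR(20)
AS
BEGIN
    RETURN (CASE @DB_Type WHEN 'I' THEN 'Intermediate'
                          WHEN 'P' THEN 'Pending'
                          ELSE 'Basic'
            END);
END;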
The interesting part is that one of our developers said using a database function would be inefficient, because database functions are slower than CASE statements. I searched the internet to find out which approach is better in terms of efficiency, but unfortunately I found nothing that I would consider a satisfying answer. Please enlighten me with your thoughts: which approach is better?
A scalar UDF will generally be slower than a CASE statement.
Please refer to this article:
http://blogs.msdn.com/b/sqlserverfaq/archive/2009/10/06/performance-benefits-of-using-expression-over-user-defined-functions.aspx
The following article discusses when to use a UDF:
http://www.sql-server-performance.com/2005/sql-server-udfs/
Summary:
There is a large performance penalty when user-defined functions are used. This penalty shows up as poor query execution time when a query applies a UDF to a large number of rows, typically 1,000 or more. The penalty is incurred because the SQL Server database engine must create its own internal, cursor-like processing: it must invoke the UDF on each row. If the UDF is used in the WHERE clause, this may happen as part of filtering the rows. If the UDF is used in the select list, it happens when creating the results of the query to pass to the next stage of query processing.
It's the row-by-row processing that slows SQL Server down the most.
When using a scalar function (a function that returns one value), the contents of the function will be executed once per row, whereas the CASE statement is evaluated across the entire set.
By operating against the entire set you allow the server to optimise your query more efficiently.
So the theory goes that if the same query is run both ways against a large dataset, the function should be slower. However, the difference may be trivial when operating against your data, so you should try both methods and test them to determine whether any performance trade-off is worth the increased utility of a function.
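If you want to time the two approaches against your own data, one simple way (reusing the queries from the question) is SQL Server's built-in timing statistics:

SET STATISTICS TIME ON;

SELECT (CASE DB_Type WHEN 'I' THEN 'Intermediate'
                     WHEN 'P' THEN 'Pending'
                     ELSE 'Basic' END)
FROM DB_table;

SELECT dbo.GetTypeName(DB_Type)
FROM DB_table;

SET STATISTICS TIME OFF;

The CPU and elapsed time for each statement are then reported in the Messages tab, so you can compare them directly on a realistic number of rows.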
Your developer is right. Functions will slow down your query.
https://sqlserverfast.com/?s=user+defined+ugly
Calling a function is like:
wrap the parts in paper
put them into a bag
carry it to the mechanic
let him unwrap it, do something, and wrap up the result
carry it back
use it
Can these values (returned from calls like SQLGetConnectAttr) be tested for success/failure using the standard WinAPI FAILED/SUCCEEDED macros, to avoid checking individual values? If not, are there special SQL variants of those macros?
I don't think such a macro exists. The SQLxxx() functions return various values, and it depends on the application whether a returned value indicates success or failure. One example of such a value is SQL_NO_DATA_FOUND. Is it success or failure? In my opinion it depends on the application and the context.
We're currently investigating the load against our SQL server and looking at ways to alleviate it. During my post-secondary education, I was always told that, from a performance standpoint, it was cheaper to make SQL Server do the work. But is this true?
Here's an example:
SELECT ord_no FROM oelinhst_sql
This returns 783119 records in 14 seconds. The field is a char(8), but all of our order numbers are six digits long, so each has two leading blank characters. We typically trim this field, so I ran the following test:
SELECT LTRIM(ord_no) FROM oelinhst_sql
This returned the 783119 records in 13 seconds. I also tried one more test:
SELECT LTRIM(RTRIM(ord_no)) FROM oelinhst_sql
There is nothing to trim on the right; I was just trying to see whether there was any overhead in the mere act of calling the function. It still returned in 13 seconds.
My manager was talking about moving things like string trimming out of the SQL and into the source code, but the test results suggest otherwise. My manager also says he heard somewhere that using SQL functions meant that indexes would not be used. Is there any truth to this either?
Only optimize code that you have proven to be the slowest part of your system. Your data so far indicates that SQL string manipulation functions are not affecting performance at all. Take this data to your manager.
If you use a function or type cast in the WHERE clause it can often prevent the SQL server from using indexes. This does not apply to transforming returned columns with functions.
It's typically user defined functions (UDFs) that get a bad rap with regards to SQL performance and might be the source of the advice you're getting.
The reason for this is you can build some pretty hairy functions that cause massive overhead with exponential effect.
As you've found with RTRIM and LTRIM, this isn't a blanket reason to stop using all functions on the SQL side.
It somewhat depends on what is encompassed by "things like string trimming", but for string trimming at least, I'd definitely let the database do it (there will be less network traffic as well). As for the indexes, they will still be used if your WHERE clause uses just the column itself (as opposed to a function of the column). Use of the indexes won't be affected at all by using functions on the columns you're retrieving, only by how you're selecting the rows.
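To illustrate with the ord_no column from the question (the literal order number here is made up):

-- this form can still use an index on ord_no
SELECT ord_no FROM oelinhst_sql WHERE ord_no = '  123456';

-- wrapping the column in a function in the WHERE clause generally prevents that
SELECT ord_no FROM oelinhst_sql WHERE LTRIM(ord_no) = '123456';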
You may want to have a look at this for performance improvement suggestions: http://net.tutsplus.com/tutorials/other/top-20-mysql-best-practices/
As I said in my comment, reduce the data read per query and you will get a speed increase.
You said:
our order numbers are six-digits long so each has two blank characters leading
That makes me think you are storing numbers in a string; if so, why are you not using a numeric data type? The smallest numeric type which will hold six digits is an INT (I'm assuming SQL Server), and that already saves you 4 bytes per order number. Over the number of rows you mention, that's quite a lot less data to read off disk and send over the network.
Fully optimise your database before looking to deal with the data outside of it; it's what a database server is designed to do, serve data.
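If the column really does contain only numbers, a hedged sketch of the change would be the following; try it on a copy of the table first, since it will fail if any existing value is not a valid number, and any dependent indexes or constraints may need attention:

ALTER TABLE oelinhst_sql ALTER COLUMN ord_no INT;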
As you found, it often pays to measure, but what I think your manager may have been referring to is something like this.
This is typically much faster:
SELECT SomeFields FROM oelinhst_sql
WHERE
datetimeField > '1/1/2011'
and
datetimeField < '2/1/2011'
than this
SELECT SomeFields FROM oelinhst_sql
WHERE
Month(datetimeField) = 1
and
year(datetimeField) = 2011
even though the rows that are returned are the same. In the second version the functions are applied to the column, so an index on datetimeField cannot be used to seek the matching rows and the table has to be scanned.