First of all, I'm a beginner in unit testing. For my tests I want to use NSubstitute, so I read the tutorial on the website and also the mock comparison by Richard Banks. Both of them test against interfaces, not against classes. The statement is "Generally this [substituted] type will be an interface, but you can also substitute classes in cases of emergency."
Now I'm wondering about the purpose of testing against interfaces. Here is the example interface from the NSubstitute website (please note that I have converted the C# code to VB.NET):
Public Interface ICalculator
    Function Add(a As Double, b As Double) As Double
    Property Mode As String
    Event PoweringUp As EventHandler
End Interface
And here is the unit test from the website (using the NUnit framework):
<Test>
Sub ReturnValue_For_Methods()
    Dim calculator = Substitute.For(Of ICalculator)()
    calculator.Add(1, 2).Returns(3)
    Assert.AreEqual(calculator.Add(1, 2), 3)
End Sub
OK, that works and the unit test passes. But what is the point of this? It does not test any code. The Add method could contain any number of errors, which would not be detected when testing against the interface, like this:
Public Class Calculator
    Implements ICalculator

    Public Function Add(a As Double, b As Double) As Double Implements ICalculator.Add
        Return 1 / 0
    End Function

    ' ...
End Class
The Add method performs a division by zero, so a test of the real implementation should fail; but because the test runs against a substitute for ICalculator, it passes anyway.
Could you please help me understand this? What is the point of testing the interface rather than the code?
Thanks in advance
Michael
The idea behind mocking is to isolate the class we are testing from its dependencies. So we don't mock the class we are testing (in this case Calculator); we mock an ICalculator when testing some other class that uses an ICalculator.
A small example is when we want to test how something interacts with a database, but we don't want to use a real database for some quick tests. (Please excuse the C#.)
[Test]
public void SaveTodoItemToDatabase() {
    var substituteDb = Substitute.For<IDatabase>();
    var todoScreen = new TodoViewModel(substituteDb);
    todoScreen.Item = "Read StackOverflow";
    todoScreen.CurrentUser = "Anna";
    todoScreen.Save();
    substituteDb.Received().SaveTodo("Read StackOverflow", "Anna");
}
The idea here is that we've separated the TodoViewModel from the details of saving to the database. We don't want to worry about configuring a database, or getting a connection string, or having data from previous test runs interfering with future test runs. Testing with a real database can be very valuable, but in some cases we just want to test a smaller unit of functionality. Mocking is one way of doing this.
For the real app, we'll create a TodoViewModel with a real implementation of IDatabase, and provided that implementation follows the expected contract of the interface, we can have a reasonable expectation that it will work.
Hope this helps.
Update in response to comment
The test for TodoViewModel assumes the implementation of IDatabase works, so we can focus on that class's logic. This means we'll probably want a separate set of tests for implementations of IDatabase. Say we have a SqlServerDb implementation; then we can have some tests (probably against a real database) that check it does what it promises. In those tests we'll no longer be mocking the database interface, because that's what we're testing.
Another thing we can do is have "contract tests" which we can apply to any IDatabase implementation. For example, we could have a test that says for any implementation, saving an item then loading it up again should return the same item. We can then run those tests against all implementations, SqlDb, InMemoryDb, FileDb etc. In this way we can state our assumptions about the dependencies we're mocking, then check that the actual implementations meet our assumptions.
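To make that concrete, here is a minimal sketch of such a contract test in C# with NUnit. Note the assumptions: SaveTodo comes from the example above, but the LoadTodos method and the InMemoryDb class are invented here purely for illustration.

using System.Collections.Generic;
using NUnit.Framework;

// Hypothetical interface: SaveTodo appears in the example above;
// LoadTodos is assumed here so the round trip can be checked.
public interface IDatabase
{
    void SaveTodo(string item, string user);
    List<string> LoadTodos(string user);
}

// The contract: every implementation must pass these tests.
public abstract class DatabaseContractTests
{
    // Each implementation's fixture supplies its own instance.
    protected abstract IDatabase CreateDatabase();

    [Test]
    public void SavedTodoCanBeLoadedAgain()
    {
        var db = CreateDatabase();
        db.SaveTodo("Read StackOverflow", "Anna");
        Assert.Contains("Read StackOverflow", db.LoadTodos("Anna"));
    }
}

// One concrete fixture per implementation runs the same contract.
[TestFixture]
public class InMemoryDbTests : DatabaseContractTests
{
    protected override IDatabase CreateDatabase() { return new InMemoryDb(); }
}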
I'm (somewhat) new to DI and am trying to understand how/why it's used in the codebase I am maintaining. I have found a series of classes that map data from stored procedure calls to domain objects. For example:
Private ReadOnly _domainFactory As IDomainFactory

Public Sub New(domainFactory As IDomainFactory)
    _domainFactory = domainFactory
End Sub

Protected Overrides Function MapFromRow(row As DataRow) As ISomeDomainObject
    Dim domainObject = _domainFactory.CreateSomeDomainObject()
    ' Populate the object from the stored procedure row
    domainObject.ID = CType(row("id"), Integer)
    domainObject.StartDate = CType(row("date"), Date)
    domainObject.Active = CType(row("enabled"), Boolean)
    Return domainObject
End Function
The IDomainFactory is injected with Spring.NET. Its implementation simply has a series of methods that return new instances of the various domain objects, e.g.:
Public Function CreateSomeDomainObject() As ISomeDomainObject
    Return New SomeDomainObject()
End Function
All of the above code strikes me as worse than useless. It's hard to follow and does nothing of value. Additionally, as far as I can tell it's a misuse of DI, as DI is not really intended for local variables. Furthermore, we don't need more than one implementation of the domain objects, we don't do any unit testing, and if we did, we still wouldn't mock the domain objects. As you can see from the above code, any changes to the domain object would be the result of changes to the SP, which would mean the MapFromRow method would have to be edited anyway.
That said, should I rip this nonsense out, or is it doing something amazingly awesome that I'm missing?
The idea behind dependency injection is to make a class (or another piece of code) independent of any specific implementation of an interface. The class outlined above does not know which class implements IDomainFactory; a concrete implementation is injected through its constructor and later used by the method MapFromRow. The domain factory in turn returns an implementation of ISomeDomainObject, which is also unknown to the class.
This allows you to substitute another implementation of these interfaces without having to change the class shown above, which is very practical for unit tests. Let's assume that you have an interface IDatabase that defines a method GetCustomerByID. Code that uses this method is difficult to test, since it depends on specific customer records in the database. Instead, you can inject a dummy database class that returns customers generated in code, without ever touching a physical database. Such a dummy class is often called a mock object.
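For example, a hand-rolled mock might look like this (a minimal C# sketch; the Customer and CustomerService types are invented for illustration):

public interface IDatabase
{
    Customer GetCustomerByID(int id);
}

public class Customer
{
    public int ID { get; set; }
    public string Name { get; set; }
}

// A dummy implementation that never touches a physical database.
public class FakeDatabase : IDatabase
{
    public Customer GetCustomerByID(int id)
    {
        return new Customer { ID = id, Name = "Test Customer " + id };
    }
}

// The class under test receives the dummy through its constructor,
// exactly like IDomainFactory is injected in the question's code.
public class CustomerService
{
    private readonly IDatabase _database;

    public CustomerService(IDatabase database)
    {
        _database = database;
    }

    public string DescribeCustomer(int id)
    {
        var customer = _database.GetCustomerByID(id);
        return customer.ID + ": " + customer.Name;
    }
}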
See Dependency injection on Wikipedia.
I have an object called Parameters that gets tossed from method to method down and up the call tree, across package boundaries. It has about fifty state variables. Each method might use one or two variables to control its output.
I think this is a bad idea, because I can't easily see what a method needs in order to function, or what might happen with a certain combination of parameters for module Y, which is totally unrelated to my current module.
What are some good techniques for decreasing coupling to this god object, or ideally eliminating it?
public void ExporterExcelParFonds(ParametresExecution parametres)
{
    ApplicationExcel appExcel = null;
    LogTool.Instance.ExceptionSoulevee = false;

    bool inclureReferences = parametres.inclureReferences;
    bool inclureBornes = parametres.inclureBornes;
    DateTime dateDebut = parametres.date;
    DateTime dateFin = parametres.dateFin;

    try
    {
        LogTool.Instance.AfficherMessage(Variables.msg_GenerationRapportPortefeuilleReference);
        bool fichiersPreparesAvecSucces = PreparerFichiers(parametres, Sections.exportExcelParFonds);
        if (!fichiersPreparesAvecSucces)
        {
            parametres.afficherRapportApresGeneration = false;
            LogTool.Instance.ExceptionSoulevee = true;
        }
        else
        {
            // ...
The caller would do:
PortefeuillesReference pr = new PortefeuillesReference();
pr.ExporterExcelParFonds(parametres);
First, at the risk of stating the obvious: pass the parameters which are used by the methods, rather than the god object.
This, however, might lead to some methods needing huge numbers of parameters, because they call other methods, which call other methods in turn, and so on. That was probably the inspiration for putting everything in a god object. I'll give a simplified example of such a method with too many parameters; you'll have to imagine that "too many" == 3 here :-)
public void PrintFilteredReport(
    Data data, FilterCriteria criteria, ReportFormat format)
{
    var filteredData = Filter(data, criteria);
    PrintReport(filteredData, format);
}
So the question is, how can we reduce the number of parameters without resorting to a god object? The answer is to move away from procedural programming and make good use of object-oriented design. Objects can use each other without needing to know the parameters that were used to initialize their collaborators:
// dataFilter service object only needs to know the criteria
var dataFilter = new DataFilter(criteria);
// report printer service object only needs to know the format
var reportPrinter = new ReportPrinter(format);
// filteredReportPrinter service object is initialized with a
// dataFilter and a reportPrinter service, but it doesn't need
// to know which parameters those are using to do their job
var filteredReportPrinter = new FilteredReportPrinter(dataFilter, reportPrinter);
Now the FilteredReportPrinter.Print method can be implemented with only one parameter:
public void Print(Data data)
{
    var filteredData = this.dataFilter.Filter(data);
    this.reportPrinter.Print(filteredData);
}
Incidentally, this sort of separation of concerns and dependency injection is good for more than just eliminating parameters. If you access collaborator objects through interfaces, then that makes your class
very flexible: you can set up FilteredReportPrinter with any filter/printer implementation you can imagine
very testable: you can pass in mock collaborators with canned responses and verify that they were used correctly in a unit test
If all your methods use the same Parameters class, then maybe it should be a member variable of a class containing the relevant methods. You can pass Parameters into the constructor of that class, assign it to a member variable, and all your methods can use it without having to receive it as a parameter.
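A minimal sketch of that suggestion, reusing the class names from the question:

public class PortefeuillesReference
{
    private readonly ParametresExecution _parametres;

    // Inject the parameter object once, at construction time...
    public PortefeuillesReference(ParametresExecution parametres)
    {
        _parametres = parametres;
    }

    // ...and methods can read it without it cluttering their signatures.
    public void ExporterExcelParFonds()
    {
        bool inclureReferences = _parametres.inclureReferences;
        // ...
    }
}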
A good way to start refactoring this god class is to split it up into smaller pieces. Find groups of properties that are related and break them out into their own classes.
You can then revisit the methods that depend on Parameters and see if you can replace it with one of the smaller classes you created.
Hard to give a good solution without some code samples and real world situations.
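As a rough illustration based only on the fields visible in ExporterExcelParFonds above (the grouping here is guesswork):

using System;

// Group related properties into small, cohesive classes...
public class ExportOptions
{
    public bool InclureReferences { get; set; }
    public bool InclureBornes { get; set; }
    public bool AfficherRapportApresGeneration { get; set; }
}

public class PeriodeRapport
{
    public DateTime DateDebut { get; set; }
    public DateTime DateFin { get; set; }
}

// ...then each method declares only the groups it actually needs:
public class PortefeuillesReference
{
    public void ExporterExcelParFonds(ExportOptions options, PeriodeRapport periode)
    {
        // ...
    }
}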
It sounds like you are not applying object-oriented (OO) principles in your design. Since you mention the word "object" I presume you are working within some sort of OO paradigm. I recommend you convert your "call tree" into objects that model the problem you are solving. A "god object" is definitely something to avoid. I think you may be missing something fundamental... If you post some code examples I may be able to answer in more detail.
Query each client for its required parameters and inject them?
Example: each "object" that requires "parameters" is a "Client". Each "Client" exposes an interface through which a "Configuration Agent" queries the Client for its required parameters. The Configuration Agent then "injects" the parameters (and only those required by a Client).
For the parameters that dictate behavior, one can instantiate an object that exhibits the configured behavior. Then client classes simply use the instantiated object; neither the client nor the service has to know what the value of the parameter is. For instance, for a parameter that tells where to read data from, have FlatFileReader, XMLFileReader and DatabaseReader all inherit the same base class (or implement the same interface). Instantiate one of them based on the value of the parameter, and clients of the reader class just ask the instantiated reader object for data, without knowing whether the data comes from a file or from the DB.
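For example (a sketch; the reader names come from the paragraph above, the factory and its parameter values are invented):

public abstract class DataReaderBase
{
    public abstract string ReadData();
}

public class FlatFileReader : DataReaderBase
{
    public override string ReadData() { return "data from a flat file"; }
}

public class XMLFileReader : DataReaderBase
{
    public override string ReadData() { return "data from an XML file"; }
}

public class DatabaseReader : DataReaderBase
{
    public override string ReadData() { return "data from the database"; }
}

public static class DataReaderFactory
{
    // The parameter value is consulted exactly once, right here...
    public static DataReaderBase Create(string source)
    {
        switch (source)
        {
            case "flatfile": return new FlatFileReader();
            case "xml":      return new XMLFileReader();
            default:         return new DatabaseReader();
        }
    }
}

// ...and clients just use the reader, never the parameter:
//   var reader = DataReaderFactory.Create(parameters.DataSource);
//   var data = reader.ReadData();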
To start, you can break your big ParametresExecution class into several classes, one per package, each holding only the parameters for that package.
Another direction could be to pass the ParametresExecution object at construction time, so you won't have to pass it around at every function call.
(I am assuming this is within a Java or .NET environment.) Convert the class into a singleton. Add a static method called getInstance() or something similar to get the name-value bundle (and stop "tramping" it around; see Ch. 10 of the book Code Complete).
Now the hard part. Presumably this is within a web app or some other environment that is not single-threaded batch processing. So, to get access to the right instance when the object is not really a true singleton, you have to hide the selection logic inside the static accessor.
In Java, you can set up a thread-local reference and initialize it when each request or sub-task starts, then code the accessor in terms of that thread-local. I don't know if something analogous exists in .NET, but you can always fake it with a Dictionary (Hash, Map) keyed by the current thread instance.
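Something analogous does exist in .NET: ThreadLocal<T> (or, on older frameworks, a field marked [ThreadStatic]). A minimal C# sketch of the accessor described above, using the Parameters class from the question:

using System.Threading;

public static class ParametersContext
{
    // One Parameters instance per thread.
    private static readonly ThreadLocal<Parameters> Current =
        new ThreadLocal<Parameters>();

    // Called once when each request or sub-task starts.
    public static void Initialize(Parameters parameters)
    {
        Current.Value = parameters;
    }

    // The static accessor the rest of the code calls.
    public static Parameters GetInstance()
    {
        return Current.Value;
    }
}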
It's a start... (there's always decomposition of the blob itself, but I built a framework that has a very similar semi-global value-store within it)
I am learning LINQ to SQL, and I am trying to set up a lookup combo box in my web page. I was using a LinqDataSource, but I then moved my LINQ to SQL code to its own class library for my DAL DLL (moved it out of the App_Code folder). So, I am converting the page code so it can still have lookups, now driven by a biz object.
Here's what I have done in my biz layer...
Public Class KosherTypes
    Public Shared Function GetKosherTypes() As List(Of KosherTypeLookup)
        Dim db As New DBDataContext
        Dim types = From kt In db.KosherTypes _
                    Where kt.IsDeleted = False _
                    Order By kt.Name _
                    Select New KosherTypeLookup With {.Name = kt.Name, .ID = kt.KosherTypeID}
        Return types.ToList()
    End Function
End Class
I then setup an object data source and mapped it to this class.
I have a few questions, because when I searched the internet I didn't find anyone who seemed to be doing this, and yet I know lookup tables / combo boxes are common...
Did I totally just miss something, and are there better way(s) to do this?
I went with returning a List, but I could have returned an IEnumerable or IQueryable. In my reading it seemed IQueryable has more functionality for LINQ to SQL, but since I am returning a simple two-column list, I only need an IEnumerable or a List. I went with a List since it's strongly typed. Did I make a good decision, or should I have returned an IEnumerable, or perhaps gone with IQueryable?
Thanks!
I'll answer in reverse order:
I wouldn't use IQueryable outside of your repos / DAL, for the simple reason that since execution is deferred, you lose control of what exactly is executed (i.e., an arbitrary function could be assigned as a delegate for Where), making maintenance and testing much harder. I don't see an issue with returning an IEnumerable(Of KosherTypeLookup), though.
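To make the risk concrete, here is a sketch of how a leaked IQueryable lets callers change what actually runs against the database (LooksKosherToMe is an invented helper):

// 'db' is the LINQ to SQL DataContext from the question.
// Nothing has hit the database yet: execution is deferred.
IQueryable<KosherType> types = db.KosherTypes;

// A caller far away from the DAL can append an operator that
// LINQ to SQL cannot translate; nothing complains until...
var filtered = types.Where(kt => LooksKosherToMe(kt.Name));

var list = filtered.ToList(); // ...here, with a runtime NotSupportedException

// ...where LooksKosherToMe is any ordinary C# method:
static bool LooksKosherToMe(string name) { return name.Length > 3; }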
If the lookup is a static lookup that never or rarely changes, you might want to look at a way to cache the lookup after the first use, to avoid having to hit the db every time that box is called. It really depends on your expected load, though. As always, performance/load testing will highlight where you need to optimize.
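A minimal sketch of such a cache, wrapping the GetKosherTypes method from the question (a simple lock for thread safety; expiry is omitted for brevity):

using System.Collections.Generic;

public static class KosherTypeCache
{
    private static readonly object SyncRoot = new object();
    private static List<KosherTypeLookup> _cached;

    public static List<KosherTypeLookup> GetKosherTypes()
    {
        if (_cached == null)
        {
            lock (SyncRoot)
            {
                // Double-checked: only the first caller pays the db round trip.
                if (_cached == null)
                    _cached = KosherTypes.GetKosherTypes();
            }
        }
        return _cached;
    }
}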
We’ve found that the unit tests we’ve written for our C#/C++ code have really paid off.
But we still have thousands of lines of business logic in stored procedures, which only really get tested in anger when our product is rolled out to a large number of users.
What makes this worse is that some of these stored procedures end up being very long, because of the performance hit when passing temporary tables between SPs. This has prevented us from refactoring to make the code simpler.
We have made several attempts at building unit tests around some of our key stored procedures (primarily testing the performance), but have found that setting up the test data for these tests is really hard. For example, we end up copying around test databases. On top of that, the tests end up being really sensitive to change, and even the smallest change to a stored proc or table requires a large number of changes to the tests. So after many builds broke due to these database tests failing intermittently, we've just had to pull them out of the build process.
So, the main part of my question is: has anyone ever successfully written unit tests for their stored procedures?
The second part of my question is whether unit testing would be, or is, easier with LINQ.
I was thinking that rather than having to set up tables of test data, you could simply create a collection of test objects and test your LINQ code in a "LINQ to objects" situation? (I am totally new to LINQ, so I don't know if this would even work at all.)
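That idea does work in principle: if the query logic is written against IEnumerable(Of T), the same code can be exercised with plain in-memory objects. A minimal sketch (all type and test names invented; NUnit assumed):

using System;
using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

// A hypothetical record type and a piece of query logic written
// against IEnumerable, so it runs on plain objects in a test:
public class Order
{
    public bool Paid { get; set; }
    public DateTime DueDate { get; set; }
}

public static class OrderQueries
{
    public static IEnumerable<Order> Overdue(IEnumerable<Order> orders, DateTime today)
    {
        return orders.Where(o => !o.Paid && o.DueDate < today);
    }
}

[TestFixture]
public class OrderQueriesTests
{
    [Test]
    public void Overdue_Excludes_Paid_Orders()
    {
        var orders = new List<Order>
        {
            new Order { Paid = true,  DueDate = new DateTime(2008, 1, 1) },
            new Order { Paid = false, DueDate = new DateTime(2008, 1, 1) }
        };

        var overdue = OrderQueries.Overdue(orders, new DateTime(2008, 6, 1)).ToList();

        Assert.AreEqual(1, overdue.Count);
        Assert.IsFalse(overdue[0].Paid);
    }
}

Bear in mind that LINQ to SQL translates expressions to SQL at runtime, so tests passing against in-memory objects don't guarantee identical behavior against the database.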
I ran into this same issue a while back and found that if I created a simple abstract base class for data access that allowed me to inject a connection and transaction, I could unit test my sprocs to see if they did the work in SQL that I asked them to do, and then roll back so none of the test data is left in the db.
This felt better than the usual "run a script to set up my test db, then after the tests run, do a cleanup of the junk/test data". It also felt closer to unit testing, because these tests could be run alone without a great deal of "everything in the db needs to be 'just so' before I run these tests".
Here is a snippet of the abstract base class used for data access:
Public MustInherit Class Repository(Of T As Class)
    Implements IRepository(Of T)

    Private mConnectionString As String = ConfigurationManager.ConnectionStrings("Northwind.ConnectionString").ConnectionString
    Private mConnection As IDbConnection
    Private mTransaction As IDbTransaction

    Public Sub New()
        mConnection = Nothing
        mTransaction = Nothing
    End Sub

    Public Sub New(ByVal connection As IDbConnection, ByVal transaction As IDbTransaction)
        mConnection = connection
        mTransaction = transaction
    End Sub

    Public MustOverride Function BuildEntity(ByVal cmd As SqlCommand) As List(Of T)

    Public Function ExecuteReader(ByVal Parameter As Parameter) As List(Of T) Implements IRepository(Of T).ExecuteReader
        Dim entityList As List(Of T)
        If Not mConnection Is Nothing Then
            ' Use the injected connection/transaction (the unit-test path)
            Using cmd As SqlCommand = mConnection.CreateCommand()
                cmd.Transaction = mTransaction
                cmd.CommandType = Parameter.Type
                cmd.CommandText = Parameter.Text
                If Not Parameter.Items Is Nothing Then
                    For Each param As SqlParameter In Parameter.Items
                        cmd.Parameters.Add(param)
                    Next
                End If
                entityList = BuildEntity(cmd)
                If Not entityList Is Nothing Then
                    Return entityList
                End If
            End Using
        Else
            ' No injected connection: open one from config (the production path)
            Using conn As SqlConnection = New SqlConnection(mConnectionString)
                Using cmd As SqlCommand = conn.CreateCommand()
                    cmd.CommandType = Parameter.Type
                    cmd.CommandText = Parameter.Text
                    If Not Parameter.Items Is Nothing Then
                        For Each param As SqlParameter In Parameter.Items
                            cmd.Parameters.Add(param)
                        Next
                    End If
                    conn.Open()
                    entityList = BuildEntity(cmd)
                    If Not entityList Is Nothing Then
                        Return entityList
                    End If
                End Using
            End Using
        End If
        Return Nothing
    End Function
End Class
Next you will see a sample data access class that uses the above base class to get a list of products:
Public Class ProductRepository
    Inherits Repository(Of Product)
    Implements IProductRepository

    Private mCache As IHttpCache

    'This constructor is what you will use in your app
    Public Sub New(ByVal cache As IHttpCache)
        MyBase.New()
        mCache = cache
    End Sub

    'This constructor is only used for testing, so we can inject a connection/transaction and have them rolled back after the test
    Public Sub New(ByVal cache As IHttpCache, ByVal connection As IDbConnection, ByVal transaction As IDbTransaction)
        MyBase.New(connection, transaction)
        mCache = cache
    End Sub

    Public Function GetProducts() As System.Collections.Generic.List(Of Product) Implements IProductRepository.GetProducts
        Dim Parameter As New Parameter()
        Parameter.Type = CommandType.StoredProcedure
        Parameter.Text = "spGetProducts"

        Dim productList As List(Of Product)
        productList = MyBase.ExecuteReader(Parameter)
        Return productList
    End Function

    'This function is overridden in each class that inherits from the base data access class, so we can keep all the boring left-right mapping code in one place per object
    Public Overrides Function BuildEntity(ByVal cmd As System.Data.SqlClient.SqlCommand) As System.Collections.Generic.List(Of Product)
        Dim productList As New List(Of Product)
        Using reader As SqlDataReader = cmd.ExecuteReader()
            Dim product As Product
            While reader.Read()
                product = New Product()
                product.ID = reader("ProductID")
                product.SupplierID = reader("SupplierID")
                product.CategoryID = reader("CategoryID")
                product.ProductName = reader("ProductName")
                product.QuantityPerUnit = reader("QuantityPerUnit")
                product.UnitPrice = reader("UnitPrice")
                product.UnitsInStock = reader("UnitsInStock")
                product.UnitsOnOrder = reader("UnitsOnOrder")
                product.ReorderLevel = reader("ReorderLevel")
                productList.Add(product)
            End While
            If productList.Count > 0 Then
                Return productList
            End If
        End Using
        Return Nothing
    End Function
End Class
And now in your unit tests you can inherit from a very simple base class that does the setup / rollback work, or keep this on a per-test basis.
Below is the simple testing base class I used:
Imports System.Configuration
Imports System.Data
Imports System.Data.SqlClient
Imports Microsoft.VisualStudio.TestTools.UnitTesting

Public MustInherit Class TransactionFixture
    Protected mConnection As IDbConnection
    Protected mTransaction As IDbTransaction
    Private mConnectionString As String = ConfigurationManager.ConnectionStrings("Northwind.ConnectionString").ConnectionString

    <TestInitialize()> _
    Public Sub CreateConnectionAndBeginTran()
        mConnection = New SqlConnection(mConnectionString)
        mConnection.Open()
        mTransaction = mConnection.BeginTransaction()
    End Sub

    <TestCleanup()> _
    Public Sub RollbackTranAndCloseConnection()
        mTransaction.Rollback()
        mTransaction.Dispose()
        mConnection.Close()
        mConnection.Dispose()
    End Sub
End Class
And finally, below is a simple test using that test base class, showing how to test the entire CRUD cycle to make sure all the sprocs do their job and that your ADO.NET code does the left-right mapping correctly.
I know this doesn't test the "spGetProducts" sproc used in the data access sample above, but you should see the power behind this approach to unit testing sprocs.
Imports SampleApplication.Library
Imports System.Collections.Generic
Imports Microsoft.VisualStudio.TestTools.UnitTesting

<TestClass()> _
Public Class ProductRepositoryUnitTest
    Inherits TransactionFixture

    Private mRepository As ProductRepository

    <TestMethod()> _
    Public Sub Should_Insert_Update_And_Delete_Product()
        mRepository = New ProductRepository(New HttpCache(), mConnection, mTransaction)

        '** Create a test product to manipulate throughout **'
        Dim Product As New Product()
        Product.ProductName = "TestProduct"
        Product.SupplierID = 1
        Product.CategoryID = 2
        Product.QuantityPerUnit = "10 boxes of stuff"
        Product.UnitPrice = 14.95
        Product.UnitsInStock = 22
        Product.UnitsOnOrder = 19
        Product.ReorderLevel = 12

        '** Insert the new product object into SQL using your insert sproc **'
        mRepository.InsertProduct(Product)

        '** Select the product object that was just inserted and verify it does exist **'
        '** Using your GetProductById sproc **'
        Dim Product2 As Product = mRepository.GetProduct(Product.ID)
        Assert.AreEqual("TestProduct", Product2.ProductName)
        Assert.AreEqual(1, Product2.SupplierID)
        Assert.AreEqual(2, Product2.CategoryID)
        Assert.AreEqual("10 boxes of stuff", Product2.QuantityPerUnit)
        Assert.AreEqual(14.95, Product2.UnitPrice)
        Assert.AreEqual(22, Product2.UnitsInStock)
        Assert.AreEqual(19, Product2.UnitsOnOrder)
        Assert.AreEqual(12, Product2.ReorderLevel)

        '** Update the product object **'
        Product2.ProductName = "UpdatedTestProduct"
        Product2.SupplierID = 2
        Product2.CategoryID = 1
        Product2.QuantityPerUnit = "a box of stuff"
        Product2.UnitPrice = 16.95
        Product2.UnitsInStock = 10
        Product2.UnitsOnOrder = 20
        Product2.ReorderLevel = 8
        mRepository.UpdateProduct(Product2) '** using your update sproc **'

        '** Select the product object that was just updated to verify it completed **'
        Dim Product3 As Product = mRepository.GetProduct(Product2.ID)
        Assert.AreEqual("UpdatedTestProduct", Product3.ProductName)
        Assert.AreEqual(2, Product3.SupplierID)
        Assert.AreEqual(1, Product3.CategoryID)
        Assert.AreEqual("a box of stuff", Product3.QuantityPerUnit)
        Assert.AreEqual(16.95, Product3.UnitPrice)
        Assert.AreEqual(10, Product3.UnitsInStock)
        Assert.AreEqual(20, Product3.UnitsOnOrder)
        Assert.AreEqual(8, Product3.ReorderLevel)

        '** Delete the product and verify it does not exist **'
        mRepository.DeleteProduct(Product3.ID)
        '** The above will use your delete-product-by-id sproc **'
        Dim Product4 As Product = mRepository.GetProduct(Product3.ID)
        Assert.IsNull(Product4)
    End Sub
End Class
I know this is a long example, but it helped to have a reusable class for the data access work, and yet another reusable class for my testing so I didn't have to do the setup/teardown work over and over again ;)
Have you tried DBUnit? It's designed to unit test your database, and just your database, without needing to go through your C# code.
If you think about the kind of code that unit testing tends to promote (small, highly cohesive, loosely coupled routines), then you should pretty much be able to see where at least part of the problem might be.
In my cynical world, stored procedures are part of the RDBMS world's long-standing attempt to persuade you to move your business processing into the database, which makes sense when you consider that server license costs tend to be related to things like processor count. The more stuff you run inside your database, the more they make from you.
But I get the impression you're actually more concerned with performance, which isn't really the preserve of unit testing at all. Unit tests are supposed to be fairly atomic and are intended to check behaviour rather than performance. And in that case you're almost certainly going to need production-class loads in order to check query plans.
I think you need a different class of testing environment. I'd suggest a copy of production as the simplest, assuming security isn't an issue. Then for each candidate release, you start with the previous version, migrate using your release procedures (which will give those a good testing as a side-effect) and run your timings.
Something like that.
The key to testing stored procedures is writing a script that populates a blank database with data that is planned out in advance to result in consistent behavior when the stored procedures are called.
I have to put my vote in for heavily favoring stored procedures, and placing your business logic where I (and most DBAs) think it belongs: in the database.
I know that we as software engineers want beautifully refactored code, written in our favorite language, to contain all of our important logic, but the realities of performance in high-volume systems, and the critical nature of data integrity, require us to make some compromises. SQL code can be ugly, repetitive, and hard to test, but I can't imagine the difficulty of tuning a database without having complete control over the design of the queries.
I am often forced to completely redesign queries, to include changes to the data model, to get things to run in an acceptable amount of time. With stored procedures, I can assure that the changes will be transparent to the caller, since a stored procedure provides such excellent encapsulation.
I am assuming that you want unit testing in MSSQL. Looking at DBUnit, there are some limitations in its support for MSSQL; for instance, it doesn't support NVarChar. Here are some real users and their problems with DBUnit.
Good question.
I have similar problems, and I have taken the path of least resistance (for me, anyway).
There are a bunch of other solutions, which others have mentioned. Many of them are better / more pure / more appropriate for others.
I was already using Testdriven.NET/MbUnit to test my C#, so I simply added tests to each project to call the stored procedures used by that app.
I know, I know. This sounds terrible, but what I need is to get off the ground with some testing and go from there. This approach means that although my coverage is low, I am testing some stored procs at the same time as I am testing the code that will be calling them. There is some logic to this.
I'm in the exact same situation as the original poster. It comes down to performance versus testability. My bias is towards testability (make it work, make it right, make it fast), which suggests keeping business logic out of the database. Databases not only lack the testing frameworks, code factoring constructs, and code analysis and navigation tools found in languages like Java, but highly factored database code is also slow (where highly factored Java code is not).
However, I do recognize the power of database set processing. When used appropriately, SQL can do some incredibly powerful stuff with very little code. So, I'm ok with some set-based logic living in the database even though I will still do everything I can to unit test it.
On a related note, it seems that very long and procedural database code is often a symptom of something else, and I think such code can be converted to testable code without incurring a performance hit. The theory is that such code often represents batch processes that periodically process large amounts of data. If these batch processes were to be converted into smaller chunks of real-time business logic that runs whenever the input data is changed, this logic could be run on the middle-tier (where it can be tested) without taking a performance hit (since the work is done in small chunks in real-time). As a side-effect, this also eliminates the long feedback-loops of batch process error handling. Of course this approach won't work in all cases, but it may work in some. Also, if there is tons of such untestable batch processing database code in your system, the road to salvation may be long and arduous. YMMV.
But I get the impression you're actually more concerned with performance, which isn't really the preserve of unit testing at all. Unit tests are supposed to be fairly atomic and are intended to check behaviour rather than performance. And in that case you're almost certainly going to need production-class loads in order to check query plans.
I think there are two quite distinct testing areas here: the performance, and the actual logic of the stored procedures.
I gave the example of testing the db performance in the past and, thankfully, we have reached a point where the performance is good enough.
I completely agree that the situation with all the business logic in the database is a bad one, but it's something that we've inherited from before most of our developers joined the company.
However, we're now adopting the web services model for our new features, and we've been trying to avoid stored procedures as much as possible, keeping the logic in the C# code and firing SqlCommands at the database (although LINQ would now be the preferred method). There is still some use of the existing SPs, which is why I was thinking about retrospectively unit testing them.
You can also try Visual Studio for Database Professionals. It's mainly about change management, but it also has tools for generating test data and unit tests.
It's pretty expensive, though.
We use DataFresh to roll back changes between each test; testing sprocs is then relatively easy.
What is still lacking is code coverage tools.
I do poor man's unit testing. If I'm lazy, the test is just a couple of valid invocations with potentially problematic parameter values.
/*
--setup
Declare @foo int
Set @foo = (Select top 1 foo from mytable)

--test
Execute wish_I_had_more_Tests @foo

--look at rowcounts / look for errors
If @@rowcount = 1 Print 'Ok!' Else Print 'Nokay!'

--Teardown
Delete from mytable where foo = @foo
*/
create procedure wish_I_had_more_Tests
as
select....
LINQ will simplify this only if you remove the logic from your stored procedures and reimplement it as LINQ queries, which would definitely be more robust and easier to test. However, it sounds like your requirements would preclude this.
TL;DR: Your design has issues.
We unit test the C# code that calls the SPs.
We have build scripts that create clean test databases.
And bigger ones we attach and detach during the test fixture.
These tests could take hours, but I think it's worth it.
One option for refactoring the code (I'll admit an ugly hack) would be to generate it via CPP (the C preprocessor), M4 (never tried it), or the like. I have a project that is doing just that, and it is actually mostly workable.
The only cases I think that might be valid for are 1) as an alternative to KLOC+ stored procedures, and 2) (this is my case) when the point of the project is to see how far (into insanity) you can push a technology.
Oh boy, sprocs don't lend themselves to (automated) unit testing. I sort of "unit test" my complex sprocs by writing tests in T-SQL batch files and hand-checking the output of the print statements and the results.
The problem with unit testing any kind of data-related programming is that you have to have a reliable set of test data to start with. A lot also depends on the complexity of the stored proc and what it does. It would be very hard to automate unit testing for a very complex procedure that modified many tables.
Some of the other posters have noted some simple ways to automate manually testing them, and also some tools you can use with SQL Server. On the Oracle side, PL/SQL guru Steven Feuerstein worked on a free unit testing tool for PL/SQL stored procedures called utPLSQL.
However, he dropped that effort and then went commercial with Quest's Code Tester for PL/SQL. Quest offers a free downloadable trial version. I'm on the verge of trying it out; my understanding is that it is good at taking care of the overhead in setting up a testing framework so that you can focus on just the tests themselves, and it keeps the tests so you can reuse them in regression testing, one of the great benefits of test-driven development. In addition, it is supposed to be good at more than just checking an output variable, and does have provision for validating data changes, but I still have to take a closer look myself. I thought this info might be of value for Oracle users.