Has anyone had any success in unit testing SQL stored procedures? - sql

We’ve found that the unit tests we’ve written for our C#/C++ code have really paid off.
But we still have thousands of lines of business logic in stored procedures, which only really get tested in anger when our product is rolled out to a large number of users.
What makes this worse is that some of these stored procedures end up being very long, because of the performance hit when passing temporary tables between SPs. This has prevented us from refactoring to make the code simpler.
We have made several attempts at building unit tests around some of our key stored procedures (primarily testing the performance), but have found that setting up the test data for these tests is really hard. For example, we end up copying around test databases. In addition to this, the tests end up being really sensitive to change, and even the smallest change to a stored proc. or table requires a large amount of changes to the tests. So after many builds breaking due to these database tests failing intermittently, we’ve just had to pull them out of the build process.
So, the main part of my question is: has anyone ever successfully written unit tests for their stored procedures?
The second part of my question is whether unit testing would be (or is) easier with LINQ?
I was thinking that rather than having to set up tables of test data, you could simply create a collection of test objects and test your LINQ code in a “LINQ to Objects” situation? (I am totally new to LINQ so don’t know if this would even work at all.)
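Roughly, the idea I have in mind would be something like this (the Order class and the discount rule below are invented purely for illustration; the point is that the query runs against an in-memory list, so no tables or test databases are involved):
Imports System.Collections.Generic
Imports System.Linq
Imports Microsoft.VisualStudio.TestTools.UnitTesting

Public Class Order
    Public ID As Integer
    Public Total As Decimal
End Class

<TestClass()> _
Public Class DiscountQueryTests

    <TestMethod()> _
    Public Sub Orders_Of_100_Or_More_Are_Flagged_For_Discount()
        ' Arrange: build the test data as plain objects instead of table rows
        Dim orders As New List(Of Order)
        orders.Add(New Order With {.ID = 1, .Total = 50D})
        orders.Add(New Order With {.ID = 2, .Total = 500D})

        ' Act: the same query shape you would write against the database,
        ' executed here by LINQ to Objects
        Dim discounted = (From o In orders _
                          Where o.Total >= 100D _
                          Select o.ID).ToList()

        ' Assert
        Assert.AreEqual(1, discounted.Count)
        Assert.AreEqual(2, discounted(0))
    End Sub
End Class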

I ran into this same issue a while back and found that if I created a simple abstract base class for data access that allowed me to inject a connection and transaction, I could unit test my sprocs to see if they did the work in SQL that I asked them to do and then rollback so none of the test data is left in the db.
This felt better than the usual "run a script to setup my test db, then after the tests run do a cleanup of the junk/test data". This also felt closer to unit testing because these tests could be run alone w/out having a great deal of "everything in the db needs to be 'just so' before I run these tests".
Here is a snippet of the abstract base class used for data access
Public MustInherit Class Repository(Of T As Class)
    Implements IRepository(Of T)

    Private mConnectionString As String = ConfigurationManager.ConnectionStrings("Northwind.ConnectionString").ConnectionString
    Private mConnection As IDbConnection
    Private mTransaction As IDbTransaction

    Public Sub New()
        mConnection = Nothing
        mTransaction = Nothing
    End Sub

    Public Sub New(ByVal connection As IDbConnection, ByVal transaction As IDbTransaction)
        mConnection = connection
        mTransaction = transaction
    End Sub

    Public MustOverride Function BuildEntity(ByVal cmd As SqlCommand) As List(Of T)

    Public Function ExecuteReader(ByVal Parameter As Parameter) As List(Of T) Implements IRepository(Of T).ExecuteReader
        Dim entityList As List(Of T)
        If Not mConnection Is Nothing Then
            Using cmd As SqlCommand = mConnection.CreateCommand()
                cmd.Transaction = mTransaction
                cmd.CommandType = Parameter.Type
                cmd.CommandText = Parameter.Text
                If Not Parameter.Items Is Nothing Then
                    For Each param As SqlParameter In Parameter.Items
                        cmd.Parameters.Add(param)
                    Next
                End If
                entityList = BuildEntity(cmd)
                If Not entityList Is Nothing Then
                    Return entityList
                End If
            End Using
        Else
            Using conn As SqlConnection = New SqlConnection(mConnectionString)
                Using cmd As SqlCommand = conn.CreateCommand()
                    cmd.CommandType = Parameter.Type
                    cmd.CommandText = Parameter.Text
                    If Not Parameter.Items Is Nothing Then
                        For Each param As SqlParameter In Parameter.Items
                            cmd.Parameters.Add(param)
                        Next
                    End If
                    conn.Open()
                    entityList = BuildEntity(cmd)
                    If Not entityList Is Nothing Then
                        Return entityList
                    End If
                End Using
            End Using
        End If
        Return Nothing
    End Function
End Class
Next, here is a sample data access class that uses the base class above to get a list of products:
Public Class ProductRepository
    Inherits Repository(Of Product)
    Implements IProductRepository

    Private mCache As IHttpCache

    'This constructor is what you will use in your app
    Public Sub New(ByVal cache As IHttpCache)
        MyBase.New()
        mCache = cache
    End Sub

    'This constructor is only used for testing, so we can inject a connection/transaction and have them rolled back after the test
    Public Sub New(ByVal cache As IHttpCache, ByVal connection As IDbConnection, ByVal transaction As IDbTransaction)
        MyBase.New(connection, transaction)
        mCache = cache
    End Sub

    Public Function GetProducts() As System.Collections.Generic.List(Of Product) Implements IProductRepository.GetProducts
        Dim Parameter As New Parameter()
        Parameter.Type = CommandType.StoredProcedure
        Parameter.Text = "spGetProducts"
        Dim productList As List(Of Product)
        productList = MyBase.ExecuteReader(Parameter)
        Return productList
    End Function

    'This function is overridden in each class that inherits from the base data access class, so we can keep all the boring left-right mapping code in one place per object
    Public Overrides Function BuildEntity(ByVal cmd As System.Data.SqlClient.SqlCommand) As System.Collections.Generic.List(Of Product)
        Dim productList As New List(Of Product)
        Using reader As SqlDataReader = cmd.ExecuteReader()
            Dim product As Product
            While reader.Read()
                product = New Product()
                product.ID = reader("ProductID")
                product.SupplierID = reader("SupplierID")
                product.CategoryID = reader("CategoryID")
                product.ProductName = reader("ProductName")
                product.QuantityPerUnit = reader("QuantityPerUnit")
                product.UnitPrice = reader("UnitPrice")
                product.UnitsInStock = reader("UnitsInStock")
                product.UnitsOnOrder = reader("UnitsOnOrder")
                product.ReorderLevel = reader("ReorderLevel")
                productList.Add(product)
            End While
            If productList.Count > 0 Then
                Return productList
            End If
        End Using
        Return Nothing
    End Function
End Class
And now in your unit test you can also inherit from a very simple base class that does your setup / rollback work - or keep this on a per unit test basis
Below is the simple testing base class I used:
Imports System.Configuration
Imports System.Data
Imports System.Data.SqlClient
Imports Microsoft.VisualStudio.TestTools.UnitTesting

Public MustInherit Class TransactionFixture
    Protected mConnection As IDbConnection
    Protected mTransaction As IDbTransaction
    Private mConnectionString As String = ConfigurationManager.ConnectionStrings("Northwind.ConnectionString").ConnectionString

    <TestInitialize()> _
    Public Sub CreateConnectionAndBeginTran()
        mConnection = New SqlConnection(mConnectionString)
        mConnection.Open()
        mTransaction = mConnection.BeginTransaction()
    End Sub

    <TestCleanup()> _
    Public Sub RollbackTranAndCloseConnection()
        mTransaction.Rollback()
        mTransaction.Dispose()
        mConnection.Close()
        mConnection.Dispose()
    End Sub
End Class
And finally, below is a simple test using that test base class. It shows how to test the entire CRUD cycle, to make sure all the sprocs do their job and that your ADO.NET code does the left-right mapping correctly.
I know this doesn't test the "spGetProducts" sproc used in the above data access sample, but you should see the power behind this approach to unit testing sprocs
Imports SampleApplication.Library
Imports System.Collections.Generic
Imports Microsoft.VisualStudio.TestTools.UnitTesting

<TestClass()> _
Public Class ProductRepositoryUnitTest
    Inherits TransactionFixture

    Private mRepository As ProductRepository

    <TestMethod()> _
    Public Sub Should_Insert_Update_And_Delete_Product()
        mRepository = New ProductRepository(New HttpCache(), mConnection, mTransaction)

        '** Create a test product to manipulate throughout **'
        Dim Product As New Product()
        Product.ProductName = "TestProduct"
        Product.SupplierID = 1
        Product.CategoryID = 2
        Product.QuantityPerUnit = "10 boxes of stuff"
        Product.UnitPrice = 14.95
        Product.UnitsInStock = 22
        Product.UnitsOnOrder = 19
        Product.ReorderLevel = 12

        '** Insert the new product object into SQL using your insert sproc **'
        mRepository.InsertProduct(Product)

        '** Select the product object that was just inserted and verify it does exist **'
        '** Using your GetProductById sproc **'
        Dim Product2 As Product = mRepository.GetProduct(Product.ID)
        Assert.AreEqual("TestProduct", Product2.ProductName)
        Assert.AreEqual(1, Product2.SupplierID)
        Assert.AreEqual(2, Product2.CategoryID)
        Assert.AreEqual("10 boxes of stuff", Product2.QuantityPerUnit)
        Assert.AreEqual(14.95, Product2.UnitPrice)
        Assert.AreEqual(22, Product2.UnitsInStock)
        Assert.AreEqual(19, Product2.UnitsOnOrder)
        Assert.AreEqual(12, Product2.ReorderLevel)

        '** Update the product object **'
        Product2.ProductName = "UpdatedTestProduct"
        Product2.SupplierID = 2
        Product2.CategoryID = 1
        Product2.QuantityPerUnit = "a box of stuff"
        Product2.UnitPrice = 16.95
        Product2.UnitsInStock = 10
        Product2.UnitsOnOrder = 20
        Product2.ReorderLevel = 8
        mRepository.UpdateProduct(Product2) '** using your update sproc **'

        '** Select the product object that was just updated to verify the update completed **'
        Dim Product3 As Product = mRepository.GetProduct(Product2.ID)
        Assert.AreEqual("UpdatedTestProduct", Product3.ProductName)
        Assert.AreEqual(2, Product3.SupplierID)
        Assert.AreEqual(1, Product3.CategoryID)
        Assert.AreEqual("a box of stuff", Product3.QuantityPerUnit)
        Assert.AreEqual(16.95, Product3.UnitPrice)
        Assert.AreEqual(10, Product3.UnitsInStock)
        Assert.AreEqual(20, Product3.UnitsOnOrder)
        Assert.AreEqual(8, Product3.ReorderLevel)

        '** Delete the product and verify it does not exist **'
        mRepository.DeleteProduct(Product3.ID)
        '** The above will use your delete-product-by-id sproc **'
        Dim Product4 As Product = mRepository.GetProduct(Product3.ID)
        Assert.AreEqual(Nothing, Product4)
    End Sub
End Class
I know this is a long example, but it helped to have a reusable class for the data access work, and yet another reusable class for my testing so I didn't have to do the setup/teardown work over and over again ;)

Have you tried DBUnit? It's designed to unit test your database, and just your database, without needing to go through your C# code.

If you think about the kind of code that unit testing tends to promote: small highly-cohesive and lowly-coupled routines, then you should pretty much be able to see where at least part of the problem might be.
In my cynical world, stored procedures are part of the RDBMS world's long-standing attempt to persuade you to move your business processing into the database, which makes sense when you consider that server license costs tend to be related to things like processor count. The more stuff you run inside your database, the more they make from you.
But I get the impression you're actually more concerned with performance, which isn't really the preserve of unit testing at all. Unit tests are supposed to be fairly atomic and are intended to check behaviour rather than performance. And in that case you're almost certainly going to need production-class loads in order to check query plans.
I think you need a different class of testing environment. I'd suggest a copy of production as the simplest, assuming security isn't an issue. Then for each candidate release, you start with the previous version, migrate using your release procedures (which will give those a good testing as a side-effect) and run your timings.
Something like that.

The key to testing stored procedures is writing a script that populates a blank database with data that is planned out in advance to result in consistent behavior when the stored procedures are called.
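For example, a small seeding routine along these lines (the Products table and the values here are purely illustrative) can be run against the blank database before the tests, so every run starts from the same known state:
Imports System.Data.SqlClient

Public Module TestDataSeeder

    Public Sub SeedProducts(ByVal connectionString As String)
        Using conn As New SqlConnection(connectionString)
            conn.Open()
            Using cmd As SqlCommand = conn.CreateCommand()
                ' Start every test run from a known, empty state
                cmd.CommandText = "DELETE FROM Products"
                cmd.ExecuteNonQuery()

                ' Insert rows whose values the stored procedure tests can assert against
                cmd.CommandText = "INSERT INTO Products (ProductName, UnitPrice, UnitsInStock) VALUES ('TestProduct A', 10.00, 5)"
                cmd.ExecuteNonQuery()
                cmd.CommandText = "INSERT INTO Products (ProductName, UnitPrice, UnitsInStock) VALUES ('TestProduct B', 20.00, 0)"
                cmd.ExecuteNonQuery()
            End Using
        End Using
    End Sub
End Module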
I have to put my vote in for heavily favoring stored procedures and placing your business logic where I (and most DBAs) think it belongs, in the database.
I know that we as software engineers want beautifully refactored code, written in our favorite language, to contain all of our important logic, but the realities of performance in high-volume systems, and the critical nature of data integrity, require us to make some compromises. SQL code can be ugly, repetitive, and hard to test, but I can't imagine the difficulty of tuning a database without having complete control over the design of the queries.
I am often forced to completely redesign queries, to include changes to the data model, to get things to run in an acceptable amount of time. With stored procedures, I can assure that the changes will be transparent to the caller, since a stored procedure provides such excellent encapsulation.

I am assuming that you want unit testing in MSSQL. Looking at DBUnit, there are some limitations in its support for MSSQL. It doesn't support NVarChar, for instance. Here are some real users and their problems with DBUnit.

Good question.
I have similar problems, and I have taken the path of least resistance (for me, anyway).
There are a bunch of other solutions, which others have mentioned. Many of them are better / more pure / more appropriate for others.
I was already using Testdriven.NET/MbUnit to test my C#, so I simply added tests to each project to call the stored procedures used by that app.
I know, I know. This sounds terrible, but what I need is to get off the ground with some testing, and go from there. This approach means that although my coverage is low I am testing some stored procs at the same time as I am testing the code which will be calling them. There is some logic to this.

I'm in the exact same situation as the original poster. It comes down to performance versus testability. My bias is towards testability (make it work, make it right, make it fast), which suggests keeping business logic out of the database. Databases not only lack the testing frameworks, code factoring constructs, and code analysis and navigation tools found in languages like Java, but highly factored database code is also slow (where highly factored Java code is not).
However, I do recognize the power of database set processing. When used appropriately, SQL can do some incredibly powerful stuff with very little code. So, I'm ok with some set-based logic living in the database even though I will still do everything I can to unit test it.
On a related note, it seems that very long and procedural database code is often a symptom of something else, and I think such code can be converted to testable code without incurring a performance hit. The theory is that such code often represents batch processes that periodically process large amounts of data. If these batch processes were to be converted into smaller chunks of real-time business logic that runs whenever the input data is changed, this logic could be run on the middle-tier (where it can be tested) without taking a performance hit (since the work is done in small chunks in real-time). As a side-effect, this also eliminates the long feedback-loops of batch process error handling. Of course this approach won't work in all cases, but it may work in some. Also, if there is tons of such untestable batch processing database code in your system, the road to salvation may be long and arduous. YMMV.

To quote an earlier answer: "But I get the impression you're actually more concerned with performance, which isn't really the preserve of unit testing at all. Unit tests are supposed to be fairly atomic and are intended to check behaviour rather than performance. And in that case you're almost certainly going to need production-class loads in order to check query plans."
I think there are two quite distinct testing areas here: the performance, and the actual logic of the stored procedures.
I gave the example of testing the db performance in the past and, thankfully, we have reached a point where the performance is good enough.
I completely agree that the situation with all the business logic in the database is a bad one, but it's something that we've inherited from before most of our developers joined the company.
However, we're now adopting the web services model for our new features, and we've been trying to avoid stored procedures as much as possible, keeping the logic in the C# code and firing SqlCommands at the database (although LINQ would now be the preferred method). There is still some use of the existing SPs, which is why I was thinking about retrospectively unit testing them.

You can also try Visual Studio for Database Professionals. It's mainly about change management but also has tools for generating test data and unit tests.
It's pretty expensive, though.

We use DataFresh to rollback changes between each test, then testing sprocs is relatively easy.
What is still lacking are code coverage tools.

I do poor man's unit testing. If I'm lazy, the test is just a couple of valid invocations with potentially problematic parameter values.
/*
--setup
Declare @foo int Set @foo = (Select top 1 foo from mytable)
--test
execute wish_I_had_more_Tests @foo
--look at rowcounts/look for errors
If @@rowcount = 1 Print 'Ok!' Else Print 'Nokay!'
--Teardown
Delete from mytable where foo = @foo
*/
create procedure wish_I_had_more_Tests
as
select....

LINQ will simplify this only if you remove the logic from your stored procedures and reimplement it as LINQ queries. Which would be much more robust and easier to test, definitely. However, it sounds like your requirements would preclude this.
TL;DR: Your design has issues.

We unit test the C# code that calls the SPs.
We have build scripts, creating clean test databases.
And bigger ones we attach and detach during test fixture setup.
These tests could take hours, but I think it's worth it.

One option for refactoring the code (I'll admit it's an ugly hack) would be to generate it via CPP (the C preprocessor), M4 (never tried it), or the like. I have a project that is doing just that and it is actually mostly workable.
The only cases I think that might be valid for are 1) as an alternative to KLOC+ stored procedures, and 2) - and this is my case - when the point of the project is to see how far (into insanity) you can push a technology.

Oh, boy. Sprocs don't lend themselves to (automated) unit testing. I sort of "unit test" my complex sprocs by writing tests in T-SQL batch files and hand-checking the output of the print statements and the results.

The problem with unit testing any kind of data-related programming is that you have to have a reliable set of test data to start with. A lot also depends on the complexity of the stored proc and what it does. It would be very hard to automate unit testing for a very complex procedure that modified many tables.
Some of the other posters have noted some simple ways to automate manually testing them, and also some tools you can use with SQL Server. On the Oracle side, PL/SQL guru Steven Feuerstein worked on a free unit testing tool for PL/SQL stored procedures called utPLSQL.
However, he dropped that effort and then went commercial with Quest's Code Tester for PL/SQL. Quest offers a free downloadable trial version. I'm on the verge of trying it out; my understanding is that it is good at taking care of the overhead of setting up a testing framework so that you can focus on just the tests themselves, and it keeps the tests so you can reuse them in regression testing, one of the great benefits of test-driven development. In addition, it is supposed to be good at more than just checking an output variable and does have provision for validating data changes, but I still have to take a closer look myself. I thought this info might be of value for Oracle users.

Related

VB Unit Test with Database connection

I am new to the unit testing project and I am wondering how I can create and use a database connection. Can someone offer a VB.NET example? It seems most are in C#.
This is as far as I have got! The function GetCorrectedTransactionEffectiveDate expects a connection object, as it looks up some goodies in the database.
So I am a little bewildered as to how this should be done in unit testing.
<TestClass()> Public Class UnitTest1

    <DataSource("Provider=Microsoft.ACE.OLEDB.12.0;Data Source=\\BServer\bucshared\Systems\ApplicationData\BOne\ENV_DEV\Database\BOne.accdb; Jet OLEDB:Database Password=password")> _
    <TestMethod()> Public Sub RecordedDateTest()
        Dim DateToday As New Date(2014, 6, 14)
        Dim TransactionDate As New Date(2014, 7, 15)
        Dim oUtil As New BerkleyOne.clsDbUtil({Database connection})
        Dim Res As Date = oUtil.GetCorrectedTransactionEffectiveDate(TransactionDate)
    End Sub
End Class
TL;DR I don't think the DataSourceAttribute is what you want.
According to the class's documentation, its purpose is to identify a data store for inputs to the test method.
So in your case, the way you'd want to use this would be to:
have the test method accept a Date (or DateTime) parameter
pass that parameter to oUtil.GetCorrectedTransactionEffectiveDate()
store the set of input parameter values in a database
use the DataSourceAttribute to identify how to locate the test input values
... but (from what I can tell) this would be more of an integration test than a unit test.
To be able to unit test your code, your code should follow the Single Responsibility Principle. But it appears the GetCorrectedTransactionEffectiveDate() method is doing at least two things:
looking up information in a database (perhaps weekend days and holidays);
performing some sort of date calculations.
As I noted in a comment, in the world of automated testing -- which appears to be your goal -- a unit test is supposed to test code (naturally), and be able to do so quickly. Therefore, you don't want the code you're unit testing to be performing network, database, or even disk I/O.
A common approach would be to separate the date-calculation logic and database access as follows (a rough sketch is shown after the list):
define an interface for the data-retrieval mechanism (GetFooById(), GetFooByName(), etc.) -- this is commonly referred to as a "repository";
implement the repository interface with a "real" class that fetches the values from a database / web service / whatever;
implement the interface with a mock class that (for example) always returns a hard-coded value, or collection of values;
better yet, create the mock implementation using a mocking framework -- this will allow you to construct the mock object and specify what it should return in your test method ("on the fly");
modify clsDbUtil to accept an IFooRepository as a constructor parameter (rather than a connection string) and delegate all the data-retrieval work to the repository.
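Here is a rough sketch of what that separation could look like. All of the type names below (ICalendarRepository, FakeCalendarRepository, TransactionDateCalculator) are hypothetical, and the "skip holidays" rule is only stand-in logic for whatever your real date correction does:
Imports System.Collections.Generic
Imports Microsoft.VisualStudio.TestTools.UnitTesting

' 1. The "repository" interface hides where the data actually comes from.
Public Interface ICalendarRepository
    Function GetHolidays() As List(Of Date)
End Interface

' 2. A fake implementation used only by the tests - no database, no I/O.
Public Class FakeCalendarRepository
    Implements ICalendarRepository

    Public Function GetHolidays() As List(Of Date) Implements ICalendarRepository.GetHolidays
        Return New List(Of Date)(New Date() {New Date(2014, 7, 4)})
    End Function
End Class

' 3. The class under test takes the repository as a constructor parameter
'    instead of a connection, so the date logic can be tested in isolation.
Public Class TransactionDateCalculator
    Private ReadOnly mRepository As ICalendarRepository

    Public Sub New(ByVal repository As ICalendarRepository)
        mRepository = repository
    End Sub

    Public Function GetCorrectedTransactionEffectiveDate(ByVal transactionDate As Date) As Date
        ' Stand-in rule: push the date forward past any holiday the repository reports.
        Dim result As Date = transactionDate
        While mRepository.GetHolidays().Contains(result)
            result = result.AddDays(1)
        End While
        Return result
    End Function
End Class

<TestClass()> _
Public Class TransactionDateTests

    <TestMethod()> _
    Public Sub EffectiveDate_Is_Moved_Off_A_Holiday()
        Dim calculator As New TransactionDateCalculator(New FakeCalendarRepository())
        Dim result As Date = calculator.GetCorrectedTransactionEffectiveDate(New Date(2014, 7, 4))
        Assert.AreEqual(New Date(2014, 7, 5), result)
    End Sub
End Class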
Now you have a test for your date calculation that should run in at most a few milliseconds. So the next time a "Y2K" type calendar altering event occurs, you'll be able to quickly verify (regression test) any changes you have to make to the date calculations.
An additional benefit is that the repository interface is (or should be) media-agnostic. If you have to go from a database to XML or JSON files or a web service, none of the date calculation code should be affected.
I realize all this isn't trivial. And it's not always feasible to modify the source code for clsDbUtil or whatever. You can still test the method; again, it'll be more of an integration test, but (as comments have noted) that's still much better than no tests at all.
Automated testing (unit / integration / acceptance testing) is a large topic; I've only scratched the surface here.
Assuming I haven't frightened you away, I would recommend you read Roy Osherove's The Art of Unit Testing. If you're working with a code base that doesn't lend itself (easily) to automated testing, I'd also recommend Michael Feathers' Working Effectively with Legacy Code.

NSubstitute Test against classes (VB.net)

First of all, I'm a beginner in unit tests. For my tests I want to use NSubstitute, so I read the tutorial on the website and also the mock comparison from Richard Banks. Both of them are testing against interfaces, not against classes. The statement is "Generally this [substituted] type will be an interface, but you can also substitute classes in cases of emergency."
Now I'm wondering about the purpose of testing against interfaces. Here is the example interface from the NSubstitute website (please note that I have converted the C# code to VB.NET):
Public Interface ICalculator
    Function Add(a As Double, b As Double) As Double
    Property Mode As String
    Event PoweringUp As EventHandler
End Interface
And here is the unit test from the website (under the NUnit-Framework):
<Test>
Sub ReturnValue_For_Methods()
    Dim calculator = Substitute.For(Of ICalculator)()
    calculator.Add(1, 2).Returns(3)
    Assert.AreEqual(calculator.Add(1, 2), 3)
End Sub
OK, that works and the unit test passes. But what is the point of this? It does not test any code. The Add method could contain any number of errors, which will not be detected when testing against interfaces - like this:
Public Class Calculator
    Implements ICalculator

    Public Function Add(a As Double, b As Double) As Double Implements ICalculator.Add
        Return 1 / 0
    End Function
    ...
End Class
The Add method performs a division by zero, so the unit test should fail - but because the test runs against the interface ICalculator, it passes.
Could you please help me understand this? What is the point of testing against the interface rather than the code?
Thanks in advance
Michael
The idea behind mocking is to isolate a class we are testing from its dependencies. So we don't mock the class we are testing, in this case Calculator, we mock an ICalculator when testing a class that uses an ICalculator.
A small example is when we want to test how something interacts with a database, but we don't want to use a real database for some quick tests. (Please excuse the C#.)
[Test]
public void SaveTodoItemToDatabase() {
    var substituteDb = Substitute.For<IDatabase>();
    var todoScreen = new TodoViewModel(substituteDb);
    todoScreen.Item = "Read StackOverflow";
    todoScreen.CurrentUser = "Anna";
    todoScreen.Save();
    substituteDb.Received().SaveTodo("Read StackOverflow", "Anna");
}
The idea here is we've separated the TodoViewModel from the details of saving to the database. We don't want to worry about configuring a database, or getting a connection string, or having data from previous test runs interfering with future test runs. Testing with a real database can be very valuable, but in some cases we just want to test a smaller unit of functionality. Mocking is one way of doing this.
For the real app, we'll create a TodoViewModel with a real implementation of IDatabase, and provided that implementation follows the expected contract of the interface then we can have a reasonable expectation that it will work.
Hope this helps.
Update in response to comment
The test for TodoViewModel assumes the implementation of IDatabase works, so we can focus on that class' logic. This means we'll probably want a separate set of tests for implementations of IDatabase. Say we have a SqlServerDb implementation; then we can have some tests (probably against a real database) that check it does what it promises. In those tests we'll no longer be mocking the database interface, because that's what we're testing.
Another thing we can do is have "contract tests" which we can apply to any IDatabase implementation. For example, we could have a test that says for any implementation, saving an item then loading it up again should return the same item. We can then run those tests against all implementations, SqlDb, InMemoryDb, FileDb etc. In this way we can state our assumptions about the dependencies we're mocking, then check that the actual implementations meet our assumptions.
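A contract test might look roughly like this in VB.NET (the LoadTodos method and the InMemoryDb implementation here are hypothetical additions for the sake of the example; the C# snippet above only shows SaveTodo):
Imports System.Collections.Generic
Imports Microsoft.VisualStudio.TestTools.UnitTesting

Public Interface IDatabase
    Sub SaveTodo(ByVal item As String, ByVal user As String)
    Function LoadTodos(ByVal user As String) As List(Of String)
End Interface

Public Class InMemoryDb
    Implements IDatabase

    Private ReadOnly mTodos As New Dictionary(Of String, List(Of String))

    Public Sub SaveTodo(ByVal item As String, ByVal user As String) Implements IDatabase.SaveTodo
        If Not mTodos.ContainsKey(user) Then mTodos(user) = New List(Of String)
        mTodos(user).Add(item)
    End Sub

    Public Function LoadTodos(ByVal user As String) As List(Of String) Implements IDatabase.LoadTodos
        If mTodos.ContainsKey(user) Then Return mTodos(user)
        Return New List(Of String)
    End Function
End Class

' The contract: any IDatabase implementation must give back what was saved.
Public MustInherit Class DatabaseContractTests
    Protected MustOverride Function CreateDatabase() As IDatabase

    <TestMethod()> _
    Public Sub Saved_Todo_Can_Be_Loaded_Again()
        Dim db As IDatabase = CreateDatabase()
        db.SaveTodo("Read StackOverflow", "Anna")
        Assert.IsTrue(db.LoadTodos("Anna").Contains("Read StackOverflow"))
    End Sub
End Class

<TestClass()> _
Public Class InMemoryDbContractTests
    Inherits DatabaseContractTests

    Protected Overrides Function CreateDatabase() As IDatabase
        ' A SqlDb or FileDb version would override this the same way,
        ' pointing at a real test database or folder instead.
        Return New InMemoryDb()
    End Function
End Class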

VB.NET EF - Creating a generic function that inspects an entity & permits entity save? Code First

I know, this sounds strange... But this is what I'm trying to do, how far along I am, and why I'm even doing it in the first place:
The class is configured, as an instance, with the name of the class. The context class is also available for preparing the batch class.
Pass a generic list of objects (entities, really) to a class.
That class (which can be pre-configured for this particular class) does just one thing: Adds a new entity to the backend database via DBContext.
The same class can be used for any entity described in the Context class, but each instance of the class is for just one entity class.
I want to write an article on my blog showing the performance of dynamically adjusting the batch size when working with EF persistence, and how constantly looking for the optimum batch size can be done.
When I say "DBContext" class, I mean a class similar to this:
Public Class CarContext
    Inherits DbContext

    Public Sub New()
        MyBase.New("name=vASASysContext")
    End Sub

    Public Property Cars As DbSet(Of Car)

    Protected Overrides Sub OnModelCreating(modelBuilder As DbModelBuilder)
        modelBuilder.Configurations.Add(Of Car)(New CarConfiguration())
    End Sub
End Class
That may sound confusing. Here is a use-case (sorta):
Need: I need to add a bunch of entities to a database. Let's call them cars, for lack of an easier example. Each entity is an instantiation of a car class, which is configured via Code First EF6 to be manipulated like any other class that is well defined in DBContext. Just like in real world classes, not all attributes are mapped to a database column.
High Level:
-I throw all the entities into a generic list, the kind of list supported by our 'batch class'.
-I create an instance of our batch class, and configure it to know that it is going to be dealing with car entities. Perhaps I even pass the context class that has the line:
-Once the batch class is ready (it may only need the dbcontext instance and the name of the entity), it is given the list of entities via a function something like:
Public Function AddToDatabase(TheList as List(Of T)) As Double
The idea is pretty simple. My batch class is set to add these cars to the database, and will add the cars in the list it was given. The same rules apply to adding the entities in batch as they do when adding via DBContext normally.
All I want to happen is that the batch class itself does not need to be customized for each entity type it would deal with. Configuration is fine, but not customization.
OK... But WHY?
Adding entities via a list is easy. The secret sauce is the batch class itself. I've already written all the logic that determines the rate at which the entities are added (in something akin to "entities per second"). It also keeps track of the best performance, and varies the batch size occasionally to verify.
The reason the function above (called AddToDatabase() for illustrative purposes) returns a double is that the double represents the amount of entities that were added per second.
Ultimately, the batch class returns to the calling method the number of entities to put in the next batch to maintain peak performance.
So my question for you all is how I can determine the entity fields, and use that to save the entities in a given list.
Can it be as simple as copying the entities? Do I even need to use reflection at all, and have the class use 'GetType' to figure out the entity class in the list (cars)?
How would you go about this?
Thank you very much in advance for reading this far, and for your thoughtful response.
[Don't read further unless you are into this kind of thing!]
The performance of a database operation isn't linear, and is dependent on several factors (memory, CPU load, DB connectivity, etc.), and the DB is not always on the same machine as the application. It may even involve web services.
Your first instinct is to say that more entities in a single batch is best, but that is probably not true in most cases. When you add entities to a batch, at first you see an increase in performance (an increase in entities/second). But as the batch size increases, the performance may reach a maximum, then start to decrease (for a lot of reasons, not excluding environmental ones, such as memory). For non-memory issues, the batch performance may start to level off, and we haven't even discussed the impact of the batch on the system itself.
So in the case of a leveling off, I don't want my batch size any larger than it needs to be to be in the neighborhood of peak performance. Also, with smaller batch sizes, the class is able to evaluate the system's performance more frequently.
Being new to Code First and EF6, I can see that there must be some way to use reflection to determine how to take the given list of entities, break them apart into the entity attributes, and persist them via the EF itself.
So far, I do it this way because I need to manually configure each parameter in the INSERT INTO...
For Each d In TheList
    s = "INSERT INTO BDTest (StartAddress, NumAddresses, LastAddress, Duration) VALUES (@StartAddress, @NumAddresses, @LastAddress, @Duration)"
    Using cmd As SqlCommand = New SqlCommand(s, conn)
        conn.Open()
        cmd.Parameters.Add("@StartAddress", Data.SqlDbType.NVarChar).Value = d.StartIP
        cmd.Parameters.Add("@NumAddresses", Data.SqlDbType.Int).Value = d.NumAddies
        cmd.Parameters.Add("@LastAddress", Data.SqlDbType.NVarChar).Value = d.LastAddie
        singleRate = CDbl(Me.TicksPerSecond / swSingle.Elapsed.Ticks)
        cmd.Parameters.Add("@Duration", Data.SqlDbType.Int).Value = singleRate
        cmd.ExecuteNonQuery()
        conn.Close()
    End Using
Next
I need to steer away in this test code from using SQL, and closer toward EF6...
What are your thoughts?
TIA!
So, there are two issues I see that you should tackle. First is your question about creating a "generic" method to add a list of entities to the database.
Public Function AddToDatabase(TheList as List(Of T)) As Double
If you are using POCO entities, then I would suggest you create an abstract base class or interface for all entity classes to inherit from/implement. I'll go with IEntity.
Public Interface IEntity
End Interface
So your Car class would be:
Public Class Car
    Implements IEntity
    ' all your class properties here
End Class
That would handle the generic issue.
The second issue is one of batch inserts. A possible implementation of your method could be as follows. This will insert in batches of 100; modify the parameter inputs as needed. Also replace MyDbContext with the actual type of your DbContext.
Public Function AddToDatabase(ByVal entities As List(Of IEntity)) As Double
    Using scope As New TransactionScope
        Dim context As MyDbContext
        Try
            context = New MyDbContext()
            context.Configuration.AutoDetectChangesEnabled = False
            Dim count = 0
            For Each entityToInsert In entities
                count += 1
                context = AddToContext(context, entityToInsert, count, 100, True)
            Next
            context.SaveChanges()
        Finally
            If context IsNot Nothing Then
                context.Dispose()
            End If
        End Try
        scope.Complete()
    End Using
End Function

Private Function AddToContext(ByVal context As MyDbContext, ByVal entity As IEntity, ByVal count As Integer, ByVal commitCount As Integer, ByVal recreateContext As Boolean) As MyDbContext
    context.Set(entity.GetType).Add(entity)
    If count Mod commitCount = 0 Then
        context.SaveChanges()
        If recreateContext Then
            context.Dispose()
            context = New MyDbContext()
            context.Configuration.AutoDetectChangesEnabled = False
        End If
    End If
    Return context
End Function
Also, my apologies if this is not 100% perfect, as I mentally converted it from C# to VB.NET while typing. It should be very close.

How can I create an "enum" type property value dynamically

I need to create a property in a class that will give the programmer a list of values to choose from. I have done this in the past using the enums type.
Public Enum FileType
    Sales
    SalesOldType
End Enum

Public Property ServiceID() As FileType
    Get
        Return m_enFileType
    End Get
    Set(ByVal value As FileType)
        m_enFileType = value
    End Set
End Property
The problem is I would like to populate the list of values on init of the class based on SQL table values. From what I have read it is not possible to create the enums on the fly since they are basically constants.
Is there a way I can accomplish my goal possibly using list or dictionary types?
OR any other method that may work.
I don't know if this will answer your question, but it's just my opinion on the matter. I like enums, mostly because they are convenient for me, the programmer. This is because when I am writing code, using an enum over a constant value gives me not only auto-complete when typing, but also the compile-time error checking that makes sure I can only pass valid enum values. However, enums just don't work for run-time defined values, since, like you say, they are compile-time defined constants.
Typically, when I use flexible values that are loaded from an SQL table like in your example, I'll just use string values. So I would just store Sales and SalesOldType in the table and use them for the value of FileType. I usually use strings and not integers, just because strings are human-readable if I'm looking at data tables while debugging something.
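So loading the allowed values once at startup could look roughly like this (the FileTypes table and FileTypeName column are made up for the example; the point is just that the valid values come from the database rather than an Enum):
Imports System.Collections.Generic
Imports System.Data.SqlClient

Public Class FileTypeList

    Private Shared mValidTypes As List(Of String)

    Public Shared Sub Load(ByVal connectionString As String)
        ' Read the valid values from the lookup table once, at init time
        mValidTypes = New List(Of String)
        Using conn As New SqlConnection(connectionString)
            conn.Open()
            Using cmd As New SqlCommand("SELECT FileTypeName FROM FileTypes", conn)
                Using reader As SqlDataReader = cmd.ExecuteReader()
                    While reader.Read()
                        mValidTypes.Add(CStr(reader("FileTypeName")))
                    End While
                End Using
            End Using
        End Using
    End Sub

    Public Shared Function IsValid(ByVal fileType As String) As Boolean
        Return mValidTypes IsNot Nothing AndAlso mValidTypes.Contains(fileType)
    End Function
End Class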
Now, you can do a bit of a hybrid, allowing the values to be stored and come from the table, but defining commonly used values as constants in code, sorta like this:
Public Class FileTypeConstants
    Public Const Sales As String = "Sales"
    Public Const SalesOldType As String = "SalesOldType"
End Class
That way you can make sure when coding with common values, a small string typo in one spot doesn't cause a bug in your program.
Also, as a side note, I write code for an application that is deployed internally to our company via ClickOnce deployment, so for seldom-added values I will still use an enum, because it's extremely easy to add a value and push out an update. So the question of using an enum versus database values can be one of how easy it is to get updates to your users. If you seldom update, database values are probably best; if you update often and the updates are not done by users, then enums can still work just as well.
Hope some of that helps you!
If the values aren't going to change after the code is compiled, then it sounds like the best option is to simply auto-generate the code. For instance, you could write a simple application that does something like this:
Public Shared Sub Main()
    Dim builder As New StringBuilder()
    builder.AppendLine("' Auto-generated code. Don't touch!! Any changes will be automatically overwritten.")
    builder.AppendLine("Public Enum FileType")
    For Each pair As KeyValuePair(Of String, Integer) In GetFileTypesFromDb()
        builder.AppendLine(String.Format("    {0} = {1}", pair.Key, pair.Value))
    Next
    builder.AppendLine("End Enum")
    File.WriteAllText("FileTypes.vb", builder.ToString())
End Sub

Public Shared Function GetFileTypesFromDb() As Dictionary(Of String, Integer)
    '...
End Function
Then, you could add that application as a pre-build step in your project so that it automatically runs each time you compile your main application.

Linq To SQL DAL and lookup data source

I am learning LINQ to SQL and I am trying to set up a lookup combo box in my web page. I was using a LINQ data source, but I then moved my LINQ to SQL code to its own class library for my DAL DLL (took it out of the app_code folder). So, I am converting the page code so that lookups can still be driven, now by a biz object.
Here's what I have done in my biz layer...
Public Class KosherTypes
    Public Shared Function GetKosherTypes() As List(Of KosherTypeLookup)
        Dim db As New DBDataContext
        Dim types = From kt In db.KosherTypes _
                    Where kt.IsDeleted = False _
                    Order By kt.Name _
                    Select New KosherTypeLookup With {.Name = kt.Name, .ID = kt.KosherTypeID}
        Return types.ToList
    End Function
End Class
I then setup an object data source and mapped it to this class.
I have a few questions as when I was doing internet searches I didn't find anyone that seemed to be doing this and yet I know lookup tables / combo boxes are common...
Did I totally just miss something and there are better way(s) to do this?
I went with returning a List, but I could have returned an IEnumerable or IQueryable. In my reading it seemed IQueryable had more functionality for LINQ to SQL, but since I am returning a simple two-column list, I only need to return an IEnumerable or List. I went with a List since it's strongly typed. Did I make a good decision? AKA - should I just have returned an IEnumerable, or perhaps gone with IQueryable?
Thanks!
I'll answer in reverse order:
I wouldn't use IQueryable outside of your repos / DAL, for the simple reason that since execution is deferred, you lose control of what exactly is executed (i.e., an arbitrary function could be assigned as a delegate for WHERE), making maintenance and testing much harder. I don't see an issue with you returning an IEnumerable(Of KosherTypeLookup), though.
If the lookup is a static lookup that never or rarely changes, you might want to look at a way to cache the lookup after the first use, to avoid having to hit the db every time that box is called. It really depends on your expected load, though. As always, performance/load testing will highlight where you need to optimize.
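A rough, hand-rolled way to cache it (reusing the KosherTypes class from above; the field names and the simple locking here are just one way to do it) might look like this:
Public Class KosherTypes

    Private Shared mCache As List(Of KosherTypeLookup)
    Private Shared ReadOnly mLock As New Object()

    Public Shared Function GetKosherTypes() As List(Of KosherTypeLookup)
        SyncLock mLock
            If mCache Is Nothing Then
                ' First call: hit the database once and keep the result around
                Dim db As New DBDataContext
                mCache = (From kt In db.KosherTypes _
                          Where kt.IsDeleted = False _
                          Order By kt.Name _
                          Select New KosherTypeLookup With {.Name = kt.Name, .ID = kt.KosherTypeID}).ToList()
            End If
            Return mCache
        End SyncLock
    End Function
End Class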