I am using NHibernate and Rhino Mocks and am having trouble testing what I want. I would like to test the following repository method without hitting the database (where _session is injected into the repository as ISession):
public class Repository : IRepository
{
(... code snipped for brevity ...)
public T FindBy<T>(Expression<Func<T, bool>> where)
{
return _session.Linq<T>().Where(where).FirstOrDefault();
}
}
My initial approach is to mock ISession and return an IQueryable stub (hand coded) when Linq is called. I have an IList of Customer objects I would like to query in memory to test my Linq query code without hitting the db, and I'm not sure what this would look like. Do I write my own implementation of IQueryable? If so, has someone already done this for this approach? Or do I need to look at other avenues?
Thanks!
The way I've done this test is not to pass the expression to the repository, but instead to expose IQueryable, giving the repository an interface such as:
public interface IRepository<T>
{
IQueryable<T> All();
// whatever else you want
}
Easily implemented like so:
public IQueryable<T> All()
{
return session.Linq<T>();
}
This means that instead of calling your method on the repository like:
var result = repository.FindBy(x => x.Id == 1);
You can do:
var result = repository.All().Where(x => x.Id == 1);
Or the LINQ syntax:
var result = from instance in repository.All()
where instance.Id == 1
select instance;
This then means you can get the same test by mocking the repository out directly which should be easier. You just get the mock to return a list you have created and called AsQueryable() on.
As you have pointed out, the point of this is to let you test the logic of your queries without involving the database which would slow them down dramatically.
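For illustration, here is a minimal sketch of such a test with Rhino Mocks and NUnit, assuming a Customer class with Id and Name properties (the names and data are purely illustrative):
using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;
using Rhino.Mocks;
[TestFixture]
public class CustomerQueryTests
{
    [Test]
    public void FindsCustomerByIdWithoutHittingTheDatabase()
    {
        // In-memory data standing in for the customers table.
        var customers = new List<Customer>
        {
            new Customer { Id = 1, Name = "Alice" },
            new Customer { Id = 2, Name = "Bob" }
        };
        // Stub the repository so All() hands back the list as an IQueryable.
        var repository = MockRepository.GenerateStub<IRepository<Customer>>();
        repository.Stub(r => r.All()).Return(customers.AsQueryable());
        // The query logic under test now runs against LINQ to Objects only.
        var result = repository.All().Where(x => x.Id == 1).FirstOrDefault();
        Assert.IsNotNull(result);
        Assert.AreEqual("Alice", result.Name);
    }
}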
From my point of view, this would be considered integration testing. NHibernate has its own tests that it passes, and it seems to me like you're trying to duplicate some of those tests in your own suite. I'd either add the NHibernate code and tests to your project and add this test there alongside theirs (that is, if they don't already have one very similar) and use their methods of testing, or move this to an integration-testing scenario and hit the database.
If it's just that you don't want to have to set up a database to test against, you're in luck since you're using NHibernate. With some googling you can find quite a few examples of how to use SQLite to "kinda" do integration testing with the database but keep it in memory.
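As a rough sketch of that approach (the mapping assembly and entity names are assumptions, and the exact SchemaExport overload may vary by NHibernate version), the in-memory SQLite configuration looks roughly like this. Note the schema has to be exported on the same open connection, because the in-memory database disappears when the connection closes:
using NHibernate.Cfg;
using NHibernate.Tool.hbm2ddl;
var cfg = new Configuration();
cfg.SetProperty(NHibernate.Cfg.Environment.ConnectionDriver, typeof(NHibernate.Driver.SQLite20Driver).AssemblyQualifiedName);
cfg.SetProperty(NHibernate.Cfg.Environment.Dialect, typeof(NHibernate.Dialect.SQLiteDialect).AssemblyQualifiedName);
cfg.SetProperty(NHibernate.Cfg.Environment.ConnectionString, "Data Source=:memory:;Version=3;New=True;");
cfg.SetProperty(NHibernate.Cfg.Environment.ReleaseConnections, "on_close");
cfg.AddAssembly(typeof(Customer).Assembly); // the assembly containing your mappings (assumed)
var sessionFactory = cfg.BuildSessionFactory();
using (var session = sessionFactory.OpenSession())
{
    // Export the schema onto the open in-memory connection, then run the test against it.
    new SchemaExport(cfg).Execute(false, true, false, session.Connection, null);
    // ... exercise your repository with this session ...
}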
I haven't messed with OData in a while, but I remember it being really useful. So I've gone for a .NET Core 3.1 EF Core + OData architecture for my API, with a view to making it fully generic, etc.
Doing a little test, I can get the expected results from my browser: e.g.
https://localhost:44310/things?someidfield=44
Cool I get back the JSON I was expecting! But it's sooo slow, why? Looking at the SQL (Profiler) I can see it has no WHERE clause, it's getting everything from the database and filtering it in memory, on half a million records?
What am I missing here? I've tried a number of ways to write the GET method, starting with not passing any queryOptions (which worked, but with the same result underneath) and then the below, where I explicitly apply the options to my EF Core entity.
[HttpGet]
[EnableQuery]
public async Task<IEnumerable<thing>> GetThingsAsync(ODataQueryOptions<thing> queryOptions)
{
return await queryOptions.ApplyTo(DB.thing).Cast<thing>().ToListAsync();
}
The result set is being loaded in memory because you're calling ToListAsync() and returning an IEnumerable.
If your GetThingsAsync method returns an IQueryable<T> (instead of IEnumerable<T>), then the query will be applied to the database and only the filtered data will be fetched.
If DB.thing is an EFCore DbSet (which implements IQueryable<T>), then you can simplify your method as
[HttpGet]
[EnableQuery]
public IQueryable<thing> GetThings()
{
    return DB.thing;
}
Furthermore, as some in the comments have already mentioned, the correct syntax for filtering in your case would be ?$filter=someidfield eq 44
I wrote a SQL script in which I created a table.
Now I need to know how I can execute this script (with which code?).
And I have another question: where must I write this code? (Which folder in a Zend project?)
If possible, please explain with an example. Thanks.
Creating tables in the database
Zend Framework is not supposed to be the one creating the tables, so my suggestion is to run those scripts in another environment.
The fastest is probably the SQL shell itself, but you can use other software, such as MySQL Workbench if you are using MySQL.
Once the tables are created, the access to the tables is made this way:
Introduction
When you are using Zend Framework, you are making use of the MVC pattern. I suggest you read up on what that is: Wikipedia MVC
If you read the Wikipedia link, you now know that access to the database is made by the model.
Thus, if you followed the recommended project structure that Zend provides, you will have a models folder under your application folder. There, you are supposed to implement the classes that will access the DB.
But well... you now know where to locate those classes, but you will ask me: how? It's easy if you know where to look. ZF provides an abstract class called Zend_Db_Table_Abstract that has all the methods that will make your life easier when interacting with your database's tables. This is the class your own classes should extend.
Example
Let's suppose you've got a page on your website where you want to show the user a list of products from your local store. You have a table in your database called "products" with all the useful information, such as name, price and availability.
You will have a controller with an action called indexAction() or listAction(); this action prepares the data for your view and will look like:
class Store_ProductsController extends Zend_Controller_Action {
public function indexAction(){
//TODO: Get data from the DataBase into $products variable
$this->view->products = $products;
}
}
And your view file will take that products variable and do stuff with it.
But now comes the magic: you will have a class that accesses the database, as I've said. It'll be something like:
class Model_Store_Products extends Zend_Db_Table_Abstract{
protected $_name = 'products';
public function getAllProducts(){
$select = $this->select()
    ->from(array('P' => $this->_name),
           array('id', 'name', 'price', 'availability'));
$productsArray = $this->fetchAll($select);
return $productsArray;
}
}
And ta-da, you have your array of products ready to be used by the controller:
class Store_ProductsController extends Zend_Controller_Action {
public function indexAction(){
$model = new Model_Store_Products();
$products = $model->getAllProducts();
$this->view->products = $products;
}
}
Since fetchAll() is a public function and our select does basically nothing but choose which columns we want (it doesn't even have a where clause), in this case it would be easier to call fetchAll() directly from the controller with no select at all, and it will retrieve the whole table (all columns):
class Store_ProductsController extends Zend_Controller_Action {
public function indexAction(){
$model = new Model_Store_Products();
$products = $model->fetchAll();
$this->view->products = $products;
}
}
Thus, our function in the model is not even needed.
This is the basic information on how to access the database using Zend Framework. Further information on how to create the Zend_Db_Table_Select object can be found in the Zend_Db_Table_Select documentation.
I hope this helps.
I doubt this is specific to NHibernate, but I have code as follows...
public class ClientController : ApiController
{
// GET /api/<controller>
public IQueryable<Api.Client> Get()
{
return Repositories.Clients.Query().Select(c => Mapper.Map<Client, Api.Client>(c));
    }
}
I basically want to query the database using the OData criteria, get the relevant 'Client' objects, and then convert them to the DTO 'Api.Client'.
But the code as-is doesn't work, because NHibernate doesn't know what to do with the Mapper... It really wants the query to come before the .Select. But I'm not sure I can get the OData query applied first?
It will work if I do
return Repositories.Clients.Query().Select(c => Mapper.Map<Client, Api.Client>(c)).ToList().AsQueryable();
But that's a bit sucky as you have to get ALL the clients from the database to do the OData query on.
Is there anyway to get the "Select" to happen after the OData query? Or another way to approach this?
I haven't tested it yet, but the open-source project NHibernate.OData could be useful for you.
The problem is that you are trying to execute C# code (Mapper.Map) inside the NHibernate call (which is translated to SQL).
You'd have to map Api.Client manually, or create a Mapper implementation that returns an Expression<Func<Client, Api.Client>> and pass it directly as a parameter to Select().
Even with that, I'm not sure if NHibernate will translate it. But you can try.
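A sketch of the hand-written expression approach, assuming Api.Client exposes Id and Name properties (adjust to your actual DTO); as noted above, whether NHibernate translates the member initialization is something you'd have to verify:
using System;
using System.Linq;
using System.Linq.Expressions;
public static class ClientProjections
{
    // An expression tree NHibernate can inspect, instead of an opaque Mapper.Map call.
    public static readonly Expression<Func<Client, Api.Client>> ToApiClient =
        c => new Api.Client
        {
            Id = c.Id,
            Name = c.Name
        };
}
// Usage in the controller:
// public IQueryable<Api.Client> Get()
// {
//     return Repositories.Clients.Query().Select(ClientProjections.ToApiClient);
// }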
AutoMapper supports this scenario with the Queryable Extensions
public IQueryable<Api.Client> Get() {
return Repositories.Clients.Query().Select(c => Mapper.Map<Client, Api.Client>(c));
}
becomes
public IQueryable<Api.Client> Get() {
return Repositories.Clients.Query().ProjectTo<Api.Client>(mapper.ConfigurationProvider);
}
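For that to work, ProjectTo needs a mapping configuration; a minimal sketch, assuming nothing beyond a straight Client-to-Api.Client map:
using AutoMapper;
// ProjectTo lives in the AutoMapper.QueryableExtensions namespace.
var mapperConfiguration = new MapperConfiguration(cfg =>
{
    cfg.CreateMap<Client, Api.Client>();
});
var mapper = mapperConfiguration.CreateMapper();
// mapper.ConfigurationProvider (used above) comes from this instance; ProjectTo builds
// the Select expression from it, so the projection is translated to SQL rather than run in memory.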
All,
I have a requirement to hide my EF implementation behind a repository. My simple question: is there a way to execute a 'find' across both a DbSet AND the DbSet.Local without having to deal with them both?
For example - I have standard repository implementation with Add/Update/Remove/FindById. I break the generic pattern by adding a FindByName method (for demo purposes only :). This gives me the following code:
Client App:
ProductCategoryRepository categoryRepository = new ProductCategoryRepository();
categoryRepository.Add(new ProductCategory { Name = "N" });
var category1 = categoryRepository.FindByName("N");
Implementation
public ProductCategory FindByName(string s)
{
// Assume name is unique for demo
return _legoContext.Categories.Where(c => c.Name == s).SingleOrDefault();
}
In this example, category1 is null.
However, if I implement the FindByName method as:
public ProductCategory FindByName(string s)
{
var t = _legoContext.Categories.Local.Where(c => c.Name == s).SingleOrDefault();
if (t == null)
{
t = _legoContext.Categories.Where(c => c.Name == s).SingleOrDefault();
}
return t;
}
In this case, I get what I expect when querying against both a new entry and one that is only in the database. But this presents a few issues that I am confused over:
1) I would assume (as a user of the repository) that cat2 below is not found. But it is found, and the great part is that cat2.Name is "Goober".
ProductCategoryRepository categoryRepository = new ProductCategoryRepository();
var cat = categoryRepository.FindByName("Technic");
cat.Name = "Goober";
var cat2 = categoryRepository.FindByName("Technic");
2) I would like to return a generic IQueryable from my repository.
It just seems like a lot of work to wrap the calls to the DbSet in a repository. Typically, this means that I've screwed something up. I'd appreciate any insight.
With older versions of EF, very complicated situations could arise quite fast due to the required references. In this version I would recommend not exposing IQueryable but ICollection or IList instead. This will contain EF in your repository and create good separation.
Edit: furthermore, by sending back ICollection, IEnumerable or IList you are restraining and controlling the queries being sent to the database. This will also allow you to fine-tune and maintain the system with greater ease. By exposing IQueryable, you expose yourself to the side effects that occur when people add more to the query (.Take(), .Where(), .SelectMany()); EF will see these additions and will generate SQL to reflect these uncontrolled queries. Not confining the queries can result in queries being executed from the UI, and it means more complicated tests and maintenance issues in the long run.
Since the point of the repository pattern is to be able to swap implementations out at will, the details of DbSets should be completely hidden.
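A minimal sketch of that shape, reusing the LegoContext and ProductCategory types from the question (the interface name and constructor injection are illustrative):
using System.Collections.Generic;
using System.Linq;
public interface IProductCategoryRepository
{
    ProductCategory FindByName(string name);
    IList<ProductCategory> FindAll();
}
public class ProductCategoryRepository : IProductCategoryRepository
{
    private readonly LegoContext _legoContext;
    public ProductCategoryRepository(LegoContext legoContext)
    {
        _legoContext = legoContext;
    }
    public ProductCategory FindByName(string name)
    {
        // The query is composed and executed inside the repository;
        // callers never get an IQueryable to tack more SQL onto.
        return _legoContext.Categories.SingleOrDefault(c => c.Name == name);
    }
    public IList<ProductCategory> FindAll()
    {
        // ToList() materializes the results, so the query shape stays under the repository's control.
        return _legoContext.Categories.ToList();
    }
}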
I think you're on a good path. The only thing I'd probably ask myself is:
Is the context long-lived? If not, then don't worry about querying Local. An object that has been inserted or deleted should only be accessible once it has been committed.
If this is a long-lived context and you need access to deleted and inserted objects, then querying Local is a good idea, but as you've pointed out, you may run into difficulties at some point.
I'm building a new app that uses NHibernate to generate the database schema, but I can see a possible problem in the future.
Obviously all the data in your database is cleared when you update the schema, but what strategies do people use to restore data to the new database? I am aware that massive changes to the schema will make this hard, but I was wondering how other people have dealt with this problem.
Cheers
Colin G
PS I will not be doing this against the live database, only using it to restore test data for integration tests and continuous integration.
When testing, we use NHibernate to create the database, then a series of builders to create the data for each test fixture. We also use SQLite for these tests, so they are lightning fast.
Our builders look something like this:
public class CustomerBuilder : Builder<Customer>
{
string firstName;
string lastName;
Guid id = Guid.Empty;
public override Customer Build()
{
return new Customer() { Id = id, FirstName = firstName, LastName = lastName };
}
public CustomerBuilder WithId(Guid newId)
{
id = newId;
return this;
}
public CustomerBuilder WithFirstName(string newFirstName)
{
firstName = newFirstName;
return this;
}
public CustomerBuilder WithLastName(string newLastName)
{
lastName = newLastName;
return this;
}
}
and usage:
var customer = new CustomerBuilder().WithFirstName("John").WithLastName("Doe").Build();
Because every line of code is written with TDD, we build up a comprehensive suite of data from scratch, and we will generally refactor some of it into factories that wrap the builders above and make it a breeze to get dummy data in.
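For instance, one of those wrapping factories might be no more than this (the name and values are illustrative):
using System;
public static class TestCustomers
{
    // Wraps the builder so tests can grab a ready-made customer in one call.
    public static Customer JohnDoe()
    {
        return new CustomerBuilder()
            .WithId(Guid.NewGuid())
            .WithFirstName("John")
            .WithLastName("Doe")
            .Build();
    }
}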
I think it is a good thing in many situations to let NHibernate generate the schema for you. To recreate the test data you either use code driven by a testing framework (such as NUnit) or you could export your test data as a SQL script which you can run after you have updated the schema.
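A rough sketch of the code-driven route with NUnit, assuming a Configuration is built elsewhere (the TestConfiguration helper and the Customer properties are hypothetical):
using System;
using NHibernate;
using NHibernate.Cfg;
using NHibernate.Tool.hbm2ddl;
using NUnit.Framework;
[TestFixture]
public class CustomerRepositoryTests
{
    private Configuration _configuration;
    private ISessionFactory _sessionFactory;
    [SetUp]
    public void SetUp()
    {
        // TestConfiguration.Build() is a hypothetical helper that wires up
        // mappings, dialect and connection string for the test database.
        _configuration = TestConfiguration.Build();
        _sessionFactory = _configuration.BuildSessionFactory();
        // Recreate the schema from the mappings, then seed test data in code.
        new SchemaExport(_configuration).Execute(false, true, false);
        using (var session = _sessionFactory.OpenSession())
        using (var tx = session.BeginTransaction())
        {
            session.Save(new Customer { Id = Guid.NewGuid(), FirstName = "John", LastName = "Doe" });
            tx.Commit();
        }
    }
}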
Just a quick question directed at @Chris Canal:
I understand that using a fluent interface to build your objects makes it look readable, but is it really worth the extra effort required to write these "builders" when you can use C# 3.0 syntax?
i.e. take your example:
var customer = new CustomerBuilder().WithFirstName("John").WithLastName("Doe").Build();
is this really worth the effort in constructing a "builder" when instead you can do this (which is arguably just as readable, and in fact less code)?:
var customer = new Customer { FirstName = "John", LastName = "Doe" };
We don't update the schema from NHibernate. We use SQLCompare to move database schemas across environments. SQLCompare does this non-destructively.
If you're already using NHibernate I would recommend creating the test data with code.
We do it similarly at our company. We use NHibernate to generate the database for our development and testing purposes. For testing we use SQLite as the back end and simply generate the test data separately for each test suite.
When updating our staging/production servers, we use SQLCompare and do some manual editing if the changes are more complex.