How to unit test azure function using Xunit and Moq - asp.net-core

I am very new to unit testing and recently started learning it from various online resources.
However, I still get confused when I need to implement it in my own code.
For the image I have attached here, could anyone suggest how or where I should start?
This is the Azure Function I will be creating unit tests for; the framework/library I would prefer is xUnit with Moq.

As mentioned in a comment, a good place to start when unit testing is looking at your code and identifying the different "paths" it can take and what the result of that path will be.
if (inventoryRequest != null)
{
    // path 1
    await _inventoryService.ProcessRequest(inventoryRequest);
    _logger.LogInformation("HBSI Inventory Queue trigger function processed.");
}
else
{
    // path 2
    _logger.LogInformation("Unable to process HBSI Rate plan Queue.");
}
In your code, because of the if statement, there are two possible paths ending in two different results - so, two unit tests.
Now you can start creating your unit tests, but first you need to work out what has to be set up to be able to trigger your code.
private readonly ILogger _logger;
private readonly IInventoryService _inventoryService;

public InventoryServiceBusFunction(ILogger logger, IInventoryService inventoryService)
{
    _logger = logger;
    _inventoryService = inventoryService;
}
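Putting the constructor and the two paths together, the function under test looks roughly like this (a sketch only - the FunctionName/ServiceBusTrigger attributes and the queue name are assumptions, since the real signature is only visible in the screenshot):
public class InventoryServiceBusFunction
{
    private readonly ILogger _logger;
    private readonly IInventoryService _inventoryService;

    public InventoryServiceBusFunction(ILogger logger, IInventoryService inventoryService)
    {
        _logger = logger;
        _inventoryService = inventoryService;
    }

    // The trigger attribute and queue name below are placeholders.
    [FunctionName("InventoryServiceBusFunction")]
    public async Task Run([ServiceBusTrigger("inventory-queue")] InventoryRequest inventoryRequest)
    {
        if (inventoryRequest != null)
        {
            await _inventoryService.ProcessRequest(inventoryRequest);
            _logger.LogInformation("HBSI Inventory Queue trigger function processed.");
        }
        else
        {
            _logger.LogInformation("Unable to process HBSI Rate plan Queue.");
        }
    }
}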
You have some dependencies being passed into your constructor via interfaces - great, this means we can mock them. We want to mock dependencies in unit tests because we want to control their behaviour for the tests. Mocking the dependencies also negates any "real" behaviour the dependency might perform, e.g. database operations, API calls, etc.
Using Moq we can mock the objects like so:
public class InventoryServiceBusFunctionTests
{
    private readonly Mock<ILogger> _mockLogger = new Mock<ILogger>();
    private readonly Mock<IInventoryService> _mockInventoryService = new Mock<IInventoryService>();
    ...
We will use these mocks later to make verifications on behaviour we expect to happen.
Next, we need to create an instance of the actual class we want to test.
private readonly InventoryServiceBusFunction _inventoryServiceBusFunction;

// using a constructor in the test class will run this code before each test
public InventoryServiceBusFunctionTests()
{
    // pass the mocked objects to initialize the class under test
    _inventoryServiceBusFunction = new InventoryServiceBusFunction(_mockLogger.Object, _mockInventoryService.Object);
}
Now that we have an instance of the InventoryServiceBusFunction class, we can use any of the public properties/methods in our tests.
[Fact]
public async Task GivenInventoryRequest_WhenFunctionRuns_ThenInventoryServiceProcessesRequest()
{
Now, remembering the paths from earlier, we can start to create the test cases. We can take the first path and create a [Fact] for it. You want to give your test case a meaningful name. I usually use the style of Given_When_Then to describe what is expected to happen.
Next, I usually add 3 comment sections to my test case:
// arrange
// act
// assert
This allows me to clearly see which parts of the test are doing what.
// act
await _inventoryServiceBusFunction.Run(inventoryRequest);
Next, I would fill in the // act section, because this will tell me (via IntelliSense) what I need to arrange. E.g. above, when hovering over the Run method, I can see that I need to pass an instance of InventoryRequest.
// arrange
var inventoryRequest = new InventoryRequest
{
    Name = "abc123",
    Quantity = 2,
    Tags = new List<string>
    {
        "foo"
    }
};
In the // arrange section, initialize an instance of the InventoryRequest class and set its properties. This can be any data, as we aren't really interested in the data itself but rather in what happens when the code runs.
if (inventoryRequest != null)
{
    // path 1
    await _inventoryService.ProcessRequest(inventoryRequest);
    _logger.LogInformation("HBSI Inventory Queue trigger function processed.");
}
Lastly, the // assert section. Here, we want to make assertions about what we expect to happen given the setup of the test. So, given the InventoryRequest is not null, we expect the if to evaluate to true and the _inventoryService.ProcessRequest(inventoryRequest) method to be executed.
// assert
_mockInventoryService
    .Verify(x => x.ProcessRequest(It.Is<InventoryRequest>(ir => ir.Name == inventoryRequest.Name
        && ir.Quantity == inventoryRequest.Quantity
        && ir.Tags.Contains(inventoryRequest.Tags[0]))));
In Moq, we can use the .Verify() method on the mock object to assert that the method was called. We can use the It.Is<T> syntax to make assertions on the data that is passed to the method.
Here is the full test case for path 1:
[Fact]
public async Task GivenInventoryRequest_WhenFunctionRuns_ThenInventoryServiceProcessesRequest()
{
    // arrange
    var inventoryRequest = new InventoryRequest
    {
        Name = "abc123",
        Quantity = 2,
        Tags = new List<string>
        {
            "foo"
        }
    };

    // act
    await _inventoryServiceBusFunction.Run(inventoryRequest);

    // assert
    _mockInventoryService
        .Verify(x => x.ProcessRequest(It.Is<InventoryRequest>(ir => ir.Name == inventoryRequest.Name
            && ir.Quantity == inventoryRequest.Quantity
            && ir.Tags.Contains(inventoryRequest.Tags[0]))));
}
Then for path 2, you are setting up the test so that the else condition is executed.
[Fact]
public async Task GivenInventoryRequestIsNull_WhenFunctionRuns_ThenInventoryServiceDoesNotProcessRequest()
{
    // arrange
    InventoryRequest inventoryRequest = null;

    // act
    await _inventoryServiceBusFunction.Run(inventoryRequest);

    // assert
    _mockInventoryService
        .Verify(x => x.ProcessRequest(It.IsAny<InventoryRequest>()), Times.Never);
}
Note - in the // assert here, I am asserting that the _inventoryService.ProcessRequest(inventoryRequest) method is never called. It should only be executed in the if branch, so if it does get called, the test will fail. You may also choose to verify that the logger method is called with the correct message.
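For example, a rough sketch of verifying that log call with Moq (assuming Moq 4.13+ for the It.IsAnyType support; because LogInformation is an extension method, the verification targets the underlying Log call, and the message string is copied from the else branch above):
// verify the "else" path logged the expected message exactly once
_mockLogger.Verify(
    x => x.Log(
        LogLevel.Information,
        It.IsAny<EventId>(),
        It.Is<It.IsAnyType>((v, t) => v.ToString() == "Unable to process HBSI Rate plan Queue."),
        It.IsAny<Exception>(),
        (Func<It.IsAnyType, Exception, string>)It.IsAny<object>()),
    Times.Once);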

Related

Build/Test verification for missing implementations of query/commands in MediatR

We're using MediatR heavily in our LoB application, where we use the command & query pattern.
Often, to continue in development, we make the commands and the queries first, since they are simple POCOs.
This can sometimes lead to forgetting to create the actual command/query handler. Since there is no compile-time validation that an implementation of the query/command actually exists, I was wondering what the best approach would be to detect a missing implementation and fail before the branch can be merged into master.
My idea so far:
Create two tests, one for queries and one for commands, that scan all the assemblies for implementations of IRequest<TResponse>, and then scan the assemblies for an associated implementation of IRequestHandler<TRequest, TResponse>.
But this would still require the tests to be executed first (which happens in the build pipeline), which still depends on the developer running the tests manually (or configuring VS to do so after compiling).
I don't know if there's a compile-time solution for this, or even whether that would be a good idea.
We've gone with a test (and thus build-time) verification.
Sharing the code here for the actual test, which we have once per domain project.
The mediator modules contain our query/command (handler) registrations, and the infrastructure modules contain our query handlers:
public class MissingHandlersTests
{
    [Fact]
    public void Missing_Handlers()
    {
        List<Assembly> assemblies = new List<Assembly>();
        assemblies.Add(typeof(MediatorModules).Assembly);
        assemblies.Add(typeof(InfrastructureModule).Assembly);

        var missingTypes = MissingHandlersHelpers.FindUnmatchedRequests(assemblies);

        Assert.Empty(missingTypes);
    }
}
The helper class:
public class MissingHandlersHelpers
{
    public static IEnumerable<Type> FindUnmatchedRequests(List<Assembly> assemblies)
    {
        var requests = assemblies.SelectMany(x => x.GetTypes())
            .Where(t => t.IsClass && t.IsClosedTypeOf(typeof(IRequest<>)))
            .ToList();

        var handlerInterfaces = assemblies.SelectMany(x => x.GetTypes())
            .Where(t => t.IsClass && (t.IsClosedTypeOf(typeof(IRequestHandler<>)) || t.IsClosedTypeOf(typeof(IRequestHandler<,>))))
            .SelectMany(t => t.GetInterfaces())
            .ToList();

        List<Type> missingRegistrations = new List<Type>();

        foreach (var request in requests)
        {
            var args = request.GetInterfaces()
                .Single(i => i.IsClosedTypeOf(typeof(IRequest<>))
                    && i.GetGenericArguments().Any()
                    && !i.IsClosedTypeOf(typeof(ICacheableRequest<>)))
                .GetGenericArguments()
                .First();

            var handler = typeof(IRequestHandler<,>).MakeGenericType(request, args);

            if (handler == null || !handlerInterfaces.Any(x => x == handler))
                missingRegistrations.Add(handler);
        }

        return missingRegistrations;
    }
}
If you are using .NET Core you could use Microsoft.AspNetCore.TestHost to create an endpoint your tests can hit. It works sort of like this:
var builder = WebHost.CreateDefaultBuilder()
    .UseStartup<TStartup>()
    .UseEnvironment(EnvironmentName.Development)
    .ConfigureTestServices(
        services =>
        {
            services.AddTransient((a) => this.SomeMockService.Object);
        });

this.Server = new TestServer(builder);
this.Services = this.Server.Host.Services;
this.Client = this.Server.CreateClient();
this.Client.BaseAddress = new Uri("http://localhost");
So we mock any HTTP calls (or anything else we want), but the real Startup gets called.
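For context, a rough sketch of what such a TestServerFixture wrapping the snippet above might look like (ISomeService and the property names are assumptions based on the snippet):
public class TestServerFixture<TStartup> : IDisposable where TStartup : class
{
    // Mock that replaces the real dependency so outbound calls are faked.
    public Mock<ISomeService> SomeMockService { get; } = new Mock<ISomeService>();

    public TestServer Server { get; }
    public IServiceProvider Services { get; }
    public HttpClient Client { get; }

    public TestServerFixture()
    {
        var builder = WebHost.CreateDefaultBuilder()
            .UseStartup<TStartup>()
            .UseEnvironment(EnvironmentName.Development)
            .ConfigureTestServices(services =>
            {
                services.AddTransient((a) => this.SomeMockService.Object);
            });

        Server = new TestServer(builder);
        Services = Server.Host.Services;
        Client = Server.CreateClient();
        Client.BaseAddress = new Uri("http://localhost");
    }

    public void Dispose()
    {
        Client.Dispose();
        Server.Dispose();
    }
}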
And our tests would be like this:
public SomeControllerTests(TestServerFixture<Startup> testServerFixture)
    : base(testServerFixture)
{
}

[Fact]
public async Task SomeController_Returns_Titles_OK()
{
    var response = await this.GetAsync("/somedata/titles");
    response.StatusCode.Should().Be(HttpStatusCode.OK);

    var responseAsString = await response.Content.ReadAsStringAsync();
    var actualResponse = Newtonsoft.Json.JsonConvert.DeserializeObject<IEnumerable<string>>(responseAsString);

    actualResponse.Should().NotBeNullOrEmpty();
    actualResponse.Should().HaveCount(20);
}
So when this test runs, if you have not registered your handler(s), it will fail! We use this to assert whatever we need (db records added, the response being what we expect, etc.), but it is a nice side effect that forgetting to register your handler gets caught at the test stage!
https://fullstackmark.com/post/20/painless-integration-testing-with-aspnet-core-web-api

Checking exceptions with TestCaseData parameters

I'm using NUnit 3 TestCaseData objects to feed test data to tests and Fluent Assertions library to check exceptions thrown.
Typically my TestCaseData object contains two parameters param1 and param2 used to create an instance of some object within the test and upon which I then invoke methods that should/should not throw exceptions, like this:
var subject = new Subject(param1, param2);
subject.Invoking(s => s.Add()).Should().NotThrow();
or
var subject = new Subject(param1, param2);
subject.Invoking(s => s.Add()).Should().Throw<ApplicationException>();
Is there a way to pass NotThrow() and Throw<ApplicationException>() parts as specific conditions in a third parameter in TestCaseData object to be used in the test? Basically I want to parameterize the test's expected result (it may be an exception of some type or no exception at all).
TestCaseData is meant for test case data, not for assertion methods.
I would keep the NotThrow and Throw in separate tests to maintain readability.
If they share a lot of setup-logic, I would extract that into shared methods to reduce the size of the test method bodies.
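A minimal sketch of that approach, reusing the Subject example from the question (the parameter types and values are placeholders, since the question does not show Subject's constructor):
[Test]
public void Add_WithValidParameters_DoesNotThrow()
{
    var subject = CreateSubject(param1: 1, param2: 2); // placeholder values

    subject.Invoking(s => s.Add()).Should().NotThrow();
}

[Test]
public void Add_WithInvalidParameters_ThrowsApplicationException()
{
    var subject = CreateSubject(param1: -1, param2: 0); // placeholder values

    subject.Invoking(s => s.Add()).Should().Throw<ApplicationException>();
}

// Shared setup logic lives in one place instead of being repeated per test.
private static Subject CreateSubject(int param1, int param2) => new Subject(param1, param2);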
The [TestCase] attribute accepts only compile-time values, whereas [TestCaseSource] builds its TestCaseData at runtime, which is what makes it possible to pass delegates such as Throw and NotThrow.
Here's a way to do it by misusing TestCaseSource.
The result is an unreadable test method, so please don't use this anywhere.
Anyway here goes:
[TestFixture]
public class ActionTests
{
    private static IEnumerable<TestCaseData> ActionTestCaseData
    {
        get
        {
            yield return new TestCaseData((Action)(() => throw new Exception()), (Action<Action>)(act => act.Should().Throw<Exception>()));
            yield return new TestCaseData((Action)(() => {}), (Action<Action>)(act => act.Should().NotThrow()));
        }
    }

    [Test]
    [TestCaseSource(typeof(ActionTests), nameof(ActionTestCaseData))]
    public void Calculate_Success(Action act, Action<Action> assert)
    {
        assert(act);
    }
}
I ended up using this:
using ExceptionResult = Action<System.Func<UserDetail>>;

[Test]
[TestCaseSource(typeof(UserEndpointTests), nameof(AddUserTestCases))]
public void User_Add(string creatorUsername, Role role, ExceptionResult result)
{
    var endpoint = new UserEndpoint(creatorUsername);
    var person = GeneratePerson();
    var request = GenerateCreateUserRequest(person, role);

    // Assertion comes here
    result(endpoint.Invoking(e => e.Add(request)));
}
private static IEnumerable AddUserTestCases
{
    get
    {
        yield return new TestCaseData(TestUserEmail, Role.User, new ExceptionResult(x => x.Should().Throw<ApplicationException>()))
            .SetName("{m} (Regular User => Regular User)")
            .SetDescription("User with Regular User role cannot add any users.");

        yield return new TestCaseData(TestAdminEmail, Role.Admin, new ExceptionResult(x => x.Should().NotThrow()))
            .SetName("{m} (Admin => Admin)")
            .SetDescription("User with Admin role adds another user with Admin role.");
    }
}
No big issues with readability; besides, the SetName() and SetDescription() methods in the test case source help with that.

CakePHP 3 integration test without a model/entity

I'm trying to test a controller function...
I want to test a couple of things:
A) That it throws an invalid request exception when a certain argument is used
B) That it works correctly when the correct argument is passed.
I've written some unit tests and those all seem fine. The only documentation I can find on this is http://book.cakephp.org/3.0/en/development/testing.html, but while the integration testing section is interesting and potentially useful, I can't figure out how I am supposed to implement it without using fixtures (which I don't necessarily want to do).
namespace App\Test\TestCase\Controller;

use Cake\ORM\TableRegistry;
use Cake\TestSuite\IntegrationTestCase;

class MusterControllerTest extends IntegrationTestCase
{
    public function testIn()
    {
        $this->in();
        $this->setExpectedException('Invalid request');
    }
}
class MusterController extends AppController {

    public $helpers = array('Address');

    public function beforeFilter(Event $event) {
        $this->Auth->allow('in');
        $this->layout = 'blank';
        $this->autoRender = false;
        $this->loadComponent('Rule');
        parent::beforeFilter($event);
    }

    public function in($param = null) {
        if (!$this->request->is(array('post', 'put')) || $this->request->data('proc') != 'yada' || is_null($param)) {
            throw new NotFoundException(__('Invalid request'));
        }

        $this->processRequest($this->request->data('hit'), $this->request->data('proc'), $param);
    }
Pointers appreciated.
The IntegrationTestCase class, as its name implies, is meant for integration testing. That is, it will be testing the interaction between the controller and any other class it uses for rendering a response.
There is another way of testing controllers, which is more difficult to accomplish but allows you to test controller methods in isolation:
public function testMyControllerMethod()
{
    $request = $this->getMock('Cake\Network\Request');
    $response = $this->getMock('Cake\Network\Response');
    $controller = new MyController($request, $response);
    $controller->startupProcess();

    // Add some assertions and expectations here
    // For example you could assign $controller->TableName to a mock class

    // Call the method you want to test
    $controller->myMethod('param1', 'param2');
}

Use MEF to compose parts but postpone the creation of the parts

As explained in these questions, I'm trying to build an application that consists of a host and multiple task-processing clients. With some help I have figured out how to discover and serialize part definitions so that I can store those definitions without having the actual runtime type loaded.
The next step I want to achieve (or the next two steps, really) is to split the composition of parts from the actual creation and connection of the objects (represented by those parts). So, given a set of parts, I would like to be able to do the following (in pseudo-code):
public sealed class Host
{
    public CreationScript Compose()
    {
        CreationScript result;
        var container = new DelayLoadCompositionContainer(
            s => result = s);
        container.Compose();

        return result;
    }

    public static void Main()
    {
        var script = Compose();

        // Send the script to the client application
        SendToClient(script);
    }
}
// Lives inside other application
public sealed class Client
{
    public void Load(CreationScript script)
    {
        var container = new ScriptLoader(script);
        container.Load();
    }

    public static void Main(string scriptText)
    {
        var script = new CreationScript(scriptText);
        Load(script);
    }
}
So that way I can compose the parts in the host application, but actually load the code and execute it in the client application. The goal is to put all the smarts of deciding what to load in one location (the host) while the actual work can be done anywhere (by the clients).
Essentially what I'm looking for is some way of getting the ComposablePart graph that MEF implicitly creates.
Now my question is whether there are any bits in MEF that would allow me to implement this kind of behaviour. I suspect the provider model may help me with this, but that is a rather large and complex part of MEF, so any guidelines would be helpful.
From lots of investigation it seems it is not possible to separate the composition process from the instantiation process in MEF, so I have had to create my own approach for this problem. The solution assumes that the scanning of plugins results in the type, import and export data being stored somehow.
In order to compose parts you need to keep track of each part instance and how it is connected to other part instances. The simplest way to do this is to make use of a graph data structure that keeps track of which import is connected to which export.
public sealed class CompositionCollection
{
    private readonly Dictionary<PartId, PartDefinition> m_Parts;
    private readonly Graph<PartId, PartEdge> m_PartConnections;

    public PartId Add(PartDefinition definition)
    {
        var id = new PartId();
        m_Parts.Add(id, definition);
        m_PartConnections.AddVertex(id);

        return id;
    }

    public void Connect(
        PartId importingPart,
        MyImportDefinition import,
        PartId exportingPart,
        MyExportDefinition export)
    {
        // Assume that edges point from the export to the import
        m_PartConnections.AddEdge(
            new PartEdge(
                exportingPart,
                export,
                importingPart,
                import));
    }
}
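Hypothetical usage of that collection (the definition objects here are placeholders that would come from the serialized scan data mentioned earlier):
var parts = new CompositionCollection();

// Register the two parts that take part in the connection.
var exporterId = parts.Add(exporterDefinition);
var importerId = parts.Add(importerDefinition);

// Record that the exporter's export satisfies the importer's import.
parts.Connect(importerId, importDefinition, exporterId, exportDefinition);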
Note that before connecting two parts it is necessary to check whether the import can actually be connected to the export. Normally MEF does that for us, but in this case we'll need to do it ourselves. An example of how to approach that is:
public bool Accepts(
    MyImportDefinition importDefinition,
    MyExportDefinition exportDefinition)
{
    if (!string.Equals(
        importDefinition.ContractName,
        exportDefinition.ContractName,
        StringComparison.OrdinalIgnoreCase))
    {
        return false;
    }

    // Determine what the actual type is we're importing. MEF provides us with
    // that information through the RequiredTypeIdentity property. We'll
    // get the type identity first (e.g. System.String)
    var importRequiredType = importDefinition.RequiredTypeIdentity;

    // Once we have the type identity we need to get the type information
    // (still in serialized format of course)
    var importRequiredTypeDef =
        m_Repository.TypeByIdentity(importRequiredType);

    // Now find the type we're exporting
    var exportType = ExportedType(exportDefinition);
    if (AvailableTypeMatchesRequiredType(importRequiredType, exportType))
    {
        return true;
    }

    // The import and export can't directly be mapped so maybe the import is a
    // special case. Try those
    Func<TypeIdentity, TypeDefinition> toDefinition =
        t => m_Repository.TypeByIdentity(t);

    if (ImportIsCollection(importRequiredTypeDef, toDefinition)
        && ExportMatchesCollectionImport(
            importRequiredType,
            exportType,
            toDefinition))
    {
        return true;
    }

    if (ImportIsLazy(importRequiredTypeDef, toDefinition)
        && ExportMatchesLazyImport(importRequiredType, exportType))
    {
        return true;
    }

    if (ImportIsFunc(importRequiredTypeDef, toDefinition)
        && ExportMatchesFuncImport(
            importRequiredType,
            exportType,
            exportDefinition))
    {
        return true;
    }

    if (ImportIsAction(importRequiredTypeDef, toDefinition)
        && ExportMatchesActionImport(importRequiredType, exportDefinition))
    {
        return true;
    }

    return false;
}
Note that the special cases (like IEnumerable<T>, Lazy<T>, etc.) require determining whether the importing type is based on a generic type, which can be a bit tricky.
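As an illustration of that generic check, a minimal sketch using plain System.Type (the real code would perform the equivalent comparison against the serialized TypeDefinition data instead):
// Detect whether an imported type is one of the special wrapper shapes by
// comparing it against the open generic definition (e.g. IEnumerable<>, Lazy<>).
private static bool IsBasedOnGenericType(Type importedType, Type openGenericType)
{
    return importedType.IsGenericType
        && importedType.GetGenericTypeDefinition() == openGenericType;
}

// e.g. ImportIsCollection would reduce to IsBasedOnGenericType(type, typeof(IEnumerable<>))
// and ImportIsLazy to IsBasedOnGenericType(type, typeof(Lazy<>)).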
Once all the composition information is stored it is possible to do the instantiation of the parts at any point in time because all the required information is available. Instantiation requires a generous helping of reflection combined with the use of the trusty Activator class and will be left as an exercise to the reader.
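For completeness, a very rough sketch of what that instantiation step could look like; PartDefinition.PartType and ResolvedImport are hypothetical names, assuming the stored definitions expose the concrete type and the member each import targets:
public object Instantiate(PartDefinition definition, IEnumerable<ResolvedImport> imports)
{
    // Create the part from its concrete type (assumes a parameterless constructor).
    var instance = Activator.CreateInstance(definition.PartType);

    foreach (var import in imports)
    {
        // Push the already-created export value into the importing member via reflection.
        var property = definition.PartType.GetProperty(import.MemberName);
        property?.SetValue(instance, import.Value);
    }

    return instance;
}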

Cannot make Rhino Mocks return the correct type

I just started a new job and one of the first things I've been asked to do is build unit tests for the code base (the company I now work for is committed to automated testing but they do mostly integration tests and the build takes forever to complete).
Everything started nicely: I began breaking dependencies here and there and writing isolated unit tests, but now I'm having an issue with Rhino Mocks not being able to handle the following situation:
//authenticationSessionManager is injected through the constructor.
var authSession = authenticationSessionManager.GetSession(new Guid(authentication.SessionId));
((IExpirableSessionContext)authSession).InvalidateEnabled = false;
The type that the GetSession method returns is SessionContext and, as you can see, it gets cast to the IExpirableSessionContext interface.
There is also an ExpirableSessionContext object that inherits from SessionContext and implements the IExpirableSessionContext interface.
The way the session object is stored and retrieved is shown in the following snippet:
private readonly Dictionary<Guid, SessionContext<TContent>> Sessions = new Dictionary<Guid, SessionContext<TContent>>();

public override SessionContext<TContent> GetSession(Guid sessionId)
{
    var session = base.GetSession(sessionId);
    if (session != null)
    {
        ((IExpirableSessionContext)session).ResetTimeout();
    }

    return session;
}
public override SessionContext<TContent> CreateSession(TContent content)
{
    var session = new ExpirableSessionContext<TContent>(content, SessionTimeoutMilliseconds, new TimerCallback(InvalidateSession));
    Sessions.Add(session.Id, session);

    return session;
}
Now my problem is that when I mock the call to GetSession, even though I'm telling Rhino Mocks to return an ExpirableSessionContext<...> object, the test throws an exception on the line where it is cast to the IExpirableSessionContext interface. Here is the code in my test (I know I'm using the old syntax, please bear with me on this one):
Mocks = new MockRepository();
IAuthenticationSessionManager AuthenticationSessionMock;
AuthenticationSessionMock = Mocks.DynamicMock<IAuthenticationSessionManager>();
var stationAgentManager = new StationAgentManager(AuthenticationSessionMock);
var authenticationSession = new ExpirableSessionContext<AuthenticationSessionContent>(new AuthenticationSessionContent(AnyUserName, AnyPassword), 1, null);

using (Mocks.Record())
{
    Expect.Call(AuthenticationSessionMock.GetSession(Guid.NewGuid())).IgnoreArguments().Return(authenticationSession);
}

using (Mocks.Playback())
{
    var result = stationAgentManager.StartDeploymentSession(anyAuthenticationCookie);

    Assert.IsFalse(((IExpirableSessionContext)authenticationSession).InvalidateEnabled);
}
I think it makes sense that the cast fails, since the mocked method returns a different kind of object; the production code works because the session is created as the correct type and stored in a dictionary, and that code is never run by the test since it is being mocked.
How can I set this test up to run correctly?
Thank you for any help you can provide.
It turns out everything was working fine; the problem was that the setup for each test contains an expectation on that method call:
Expect.Call(AuthenticationSessionMock.GetSession(anySession.Id)).Return(anySession).Repeat.Any();
So this expectation was overriding the one I set in my own test. I had to take this expectation out of the setup method, put it in a helper method, and have all the other tests use that helper instead.
Once out of the way, my test started working.
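For reference, a rough sketch of that refactoring (the helper name is made up; anySession and AuthenticationSessionMock come from the original setup):
// Called only by the tests that want the generic GetSession behaviour,
// instead of living in the per-test setup where it overrides everything.
private void ExpectDefaultGetSession()
{
    Expect.Call(AuthenticationSessionMock.GetSession(anySession.Id))
        .Return(anySession)
        .Repeat.Any();
}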