Running multiple unit tests with multiple connections to a database

In my ASP.NET Core project, several unit tests use services that connect to the database and fetch real data, so multiple concurrent connections are created. When these tests run, I receive this error:
A second operation started on this context before a previous operation completed. Any instance members are not guaranteed to be thread safe.
but I do not know how I can fix this error without resorting to async approaches.

In unit tests you should not use a connection to a real database. You should use mocks and create your own data to test with.
Use the NuGet package Moq to easily create mock objects.
Example of using the mock objects:
[Fact] // xUnit test attribute, inferred from the Assert API used below
public void Test_Login()
{
    // Set up a mock database that returns an account for any email.
    Mock<IDatabase> mockDatabase = new Mock<IDatabase>();
    mockDatabase.Setup(p => p.GetAccountAsync(It.IsAny<string>()))
        .Returns((string givenEmail) => Task.FromResult(
            new Account(1, "test", givenEmail, "123", "$2b$10$pfsnDQ3IWuY/zER/uBQpedvRFntMNHGOGhOSpABKZ7bwS", false)));

    Mock<IConfiguration> mockConfiguration = new Mock<IConfiguration>();
    Mock<IHostingEnvironment> mockHostingEnvironment = new Mock<IHostingEnvironment>();

    AccountService accountService = new AccountService(mockDatabase.Object, mockConfiguration.Object, mockHostingEnvironment.Object);

    LoginViewModel loginViewModel = new LoginViewModel
    {
        EmailLogin = "test@test.com",
        PasswordLogin = "s"
    };

    Task<Account> account = accountService.LoginAsync(loginViewModel);

    Assert.NotNull(account.Result);
    Assert.Equal(loginViewModel.EmailLogin, account.Result.Email);
}
In the example above I manually set up the mock database that the service method uses to retrieve the account, then compare the returned email with the given email.
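As an aside, if some tests really must hit a live database, the error itself comes from two operations sharing a single DbContext instance. A minimal sketch of giving each operation its own short-lived context via EF Core's IDbContextFactory (AppDbContext and the Accounts set are assumptions for illustration, not the poster's types):
// Sketch: one short-lived DbContext per operation, so concurrent tests
// never share a context instance. Requires Microsoft.EntityFrameworkCore
// and System.Linq; register with services.AddDbContextFactory<AppDbContext>(...).
public class AccountReader
{
    private readonly IDbContextFactory<AppDbContext> _contextFactory;

    public AccountReader(IDbContextFactory<AppDbContext> contextFactory)
    {
        _contextFactory = contextFactory;
    }

    public Account GetAccount(string email)
    {
        // The context lives only for this call, so the synchronous API is safe here.
        using var db = _contextFactory.CreateDbContext();
        return db.Accounts.SingleOrDefault(a => a.Email == email);
    }
}
Because every call creates and disposes its own context, this sidesteps the shared-context problem without forcing everything through async code.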


ABP.io code sample for running multiple databases for multi-tenancy

Please note that I am talking about ABP.io, not the ASP.NET Boilerplate framework.
The built-in free Tenant Management module is designed to work with multiple tenants sharing a single database. However, the documentation says that the framework has built-in, friendly support for the database-per-tenant approach, including:
a new DbContext
database migration and seeding
a connection string service
I am new to ABP.io, and I want a sample that uses these framework elements to implement a separate database for every tenant.
I got started by overriding the CreateAsync method of the tenant management module, as follows.
[Dependency(ReplaceServices = true)]
[ExposeServices(typeof(ITenantAppService), typeof(TenantAppService), typeof(ExtendedTenantManagementAppService))]
public class ExtendedTenantManagementAppService : TenantAppService
{
    public ExtendedTenantManagementAppService(ITenantRepository tenantRepository,
        ITenantManager tenantManager,
        IDataSeeder dataSeeder) : base(tenantRepository, tenantManager, dataSeeder)
    {
        LocalizationResource = typeof(WorkspacesManagerResource);
        ObjectMapperContext = typeof(WorkspacesManagerApplicationModule);
    }

    public override async Task<TenantDto> CreateAsync(TenantCreateDto input)
    {
        var tenant = await TenantManager.CreateAsync(input.Name);
        input.MapExtraPropertiesTo(tenant);

        await TenantRepository.InsertAsync(tenant);
        await CurrentUnitOfWork.SaveChangesAsync();

        using (CurrentTenant.Change(tenant.Id, tenant.Name))
        {
            //TODO: Handle database creation?
            // create database
            // migrate
            // seed with essential data
            await DataSeeder.SeedAsync(
                new DataSeedContext(tenant.Id)
                    .WithProperty("AdminEmail", input.AdminEmailAddress)
                    .WithProperty("AdminPassword", input.AdminPassword)
            );
        }

        return ObjectMapper.Map<Tenant, TenantDto>(tenant);
    }
}
Any code sample?
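For reference, a minimal sketch of one direction the TODO could take, assuming the tenant's connection string is stored with SetDefaultConnectionString and the schema is created by EF Core migrations. WorkspacesManagerDbContext, the _dbContextProvider field, and the connection-string format are all assumptions, not official ABP guidance:
// Hypothetical sketch: assumes the app service also injects
// IDbContextProvider<WorkspacesManagerDbContext> as _dbContextProvider.
// Database.MigrateAsync requires Microsoft.EntityFrameworkCore.
private async Task CreateTenantDatabaseAsync(Tenant tenant)
{
    // Store a per-tenant connection string (the format is an assumption).
    tenant.SetDefaultConnectionString(
        $"Server=.;Database=Tenant_{tenant.Id};Trusted_Connection=True;");
    await TenantRepository.UpdateAsync(tenant);
    await CurrentUnitOfWork.SaveChangesAsync();

    using (CurrentTenant.Change(tenant.Id, tenant.Name))
    {
        // EF Core creates the tenant database and applies migrations in one step.
        var dbContext = await _dbContextProvider.GetDbContextAsync();
        await dbContext.Database.MigrateAsync();
    }
}
CreateAsync would call this right after SaveChangesAsync and before DataSeeder.SeedAsync, so that seeding runs against the newly migrated tenant database.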

UserManager not working inside a Timer

In my project I am trying to get a user based on its email address every second with the UserManager, but when I do this I get the following error: Cannot access a disposed object. Object name: 'UserManager`1'. This only happens when I do it inside a Timer(); if I just do it once there is no problem. How can I fix this? The timer is inside a class that is being called by a SignalR Hub.
Code:
Timer = new System.Threading.Timer(async (e) =>
{
    IEnumerable<Conversation> conversations = await _conversationsRepo.GetAllConversationsForUserEmailAsync(userMail);
    List<TwilioConversation> twilioConversations = new List<TwilioConversation>();

    foreach (Conversation conversation in conversations)
    {
        TwilioConversation twilioConversation = await _twilioService.GetConversation(conversation.TwilioConversationID);
        twilioConversation.Messages = await _twilioService.GetMessagesForConversationAsync(conversation.TwilioConversationID);
        twilioConversation.ParticipantNames = new List<string>();

        List<TwilioParticipant> participants = await _twilioService.GetParticipantsForConversationAsync(conversation.TwilioConversationID);
        foreach (TwilioParticipant participant in participants)
        {
            User user = await _userManager.FindByEmailAsync(participant.Email);
            twilioConversation.ParticipantNames.Add(user.Name);
        }
        twilioConversations.Add(twilioConversation);
    }
}, null, startTimeSpan, periodTimeSpan);
UserManager, along with quite a few other types, is a service with a scoped lifetime. This means that an instance is only valid within the lifetime of a single request.
That also means that holding on to an instance for longer is not a safe thing to do. In this particular example, UserManager depends on the UserStore, which has a dependency on a database connection, and those will definitely be closed when the request has completed.
If you need to run something outside of the context of a request, for example on a background thread or, as in your case, in some timed execution, then you should create a service scope yourself and retrieve a fresh instance of the dependency you rely on.
To do that, inject an IServiceScopeFactory and then use it to create the scope within your timer code. This also applies to all other scoped dependencies, e.g. your repository, which likely requires a database connection as well:
Timer = new System.Threading.Timer(async (e) =>
{
    using (var scope = serviceScopeFactory.CreateScope())
    {
        var conversationsRepo = scope.ServiceProvider.GetService<ConversationsRepository>();
        var userManager = scope.ServiceProvider.GetService<UserManager<User>>();
        // do stuff
    }
}, null, startTimeSpan, periodTimeSpan);
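For completeness, the scope factory itself has to come from dependency injection. A minimal sketch of the owning class (the class name ConversationPoller is hypothetical; the pattern is not):
// Sketch: the class that owns the timer receives IServiceScopeFactory via DI.
// GetRequiredService lives in Microsoft.Extensions.DependencyInjection.
public class ConversationPoller
{
    private readonly IServiceScopeFactory _serviceScopeFactory;
    private System.Threading.Timer _timer;

    public ConversationPoller(IServiceScopeFactory serviceScopeFactory)
    {
        _serviceScopeFactory = serviceScopeFactory;
    }

    public void Start(TimeSpan startTimeSpan, TimeSpan periodTimeSpan)
    {
        _timer = new System.Threading.Timer(async (e) =>
        {
            using (var scope = _serviceScopeFactory.CreateScope())
            {
                // A fresh, correctly scoped UserManager on every tick.
                var userManager = scope.ServiceProvider.GetRequiredService<UserManager<User>>();
                // resolve users here with the scoped instance
            }
        }, null, startTimeSpan, periodTimeSpan);
    }
}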

Managing CosmosDb Session Consistency levels with Session Token in web environment

My environment is ASP.NET Core 2.x accessing CosmosDb (aka DocumentDb) with the .NET SDK.
The default consistency level of my collection is set to "Session". For my use-case I need a single authenticated web user to always have consistent data in terms of reads/writes between web requests.
I have some CosmosDB repository logic that is made available to my controller logic via ASP.NET Core singleton dependency injection, like so:
services.AddSingleton<DocumentDBRepository, DocumentDBRepository>(x =>
    new DocumentDBRepository(
        WebUtil.GetMachineConfig("DOCDB_ENDPOINT", Configuration),
        WebUtil.GetMachineConfig("DOCDB_KEY", Configuration),
        WebUtil.GetMachineConfig("DOCDB_DB", Configuration),
        "MyCollection",
        maxDocDbCons));
DocumentDBRepository creates a Cosmos client like so:
public DocumentDBRepository(string endpoint, string authkey, string database, string collection, int maxConnections)
{
    _Collection = collection;
    _DatabaseId = database;

    _Client = new DocumentClient(new Uri(endpoint), authkey,
        new ConnectionPolicy()
        {
            MaxConnectionLimit = maxConnections,
            ConnectionMode = ConnectionMode.Direct,
            ConnectionProtocol = Protocol.Tcp,
            RetryOptions = new RetryOptions()
            {
                MaxRetryAttemptsOnThrottledRequests = 10
            }
        });

    _Client.OpenAsync().Wait();
    CreateDatabaseIfNotExistsAsync().Wait();
    CreateCollectionIfNotExistsAsync().Wait();
}
As far as I understand, that means one CosmosDB client per web app server. I do have multiple web app servers, so a single user might hit the CosmosDB from multiple app servers and different CosmosDB clients.
Before a user interacts with the CosmosDB, I check their session object for a CosmosDB session token, like so:
string docDbSessionToken = HttpContext.Session.GetString("StorageSessionToken");
Then, when writing a document, for example, the method looks something like this:
public async Task<Document> CreateItemAsync<T>(T item, Ref<string> sessionTokenOut, string sessionTokenIn = null)
{
    ResourceResponse<Document> response = null;

    if (string.IsNullOrEmpty(sessionTokenIn))
    {
        response = await _Client.CreateDocumentAsync(UriFactory.CreateDocumentCollectionUri(_DatabaseId, _Collection), item);
    }
    else
    {
        response = await _Client.CreateDocumentAsync(UriFactory.CreateDocumentCollectionUri(_DatabaseId, _Collection), item, new RequestOptions() { SessionToken = sessionTokenIn });
    }

    sessionTokenOut.Value = response.SessionToken;
    Document created = response.Resource;
    return created;
}
The idea being that if we have a session token, we pass one in and use it. If we don't have one, just create the document and then return the newly created session token back to the caller. This works fine...
Except, I'm unclear as to why when I do pass in a session token, I get a DIFFERENT session token back. In other words, when _Client.CreateDocumentAsync returns, response.SessionToken is always different from parameter sessionTokenIn.
Does that mean I should be using the new session token from that point on for that user? Does it mean I should ignore the new session token and use the initial session token?
How long does one of these "sessions" even last? Are they sessions in the traditional sense?
Ultimately, I just need to make sure that the same user can always read their writes, regardless of which AppServer they connect with or how many other users are currently using the DB.
I guess the confusion here is about what a session is.
Most scenarios/frameworks treat a session as a static identifier (a correlation ID), whereas with Cosmos the session token is dynamic: a kind of bookmark of CosmosDB state, which changes with every write. Naming it 'SessionToken' might be the root of the confusion.
In this specific scenario, you should use the returned session token from the Cosmos APIs.
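To make that concrete, a minimal sketch of the round trip in the question's own terms (Ref<string> and the "StorageSessionToken" key come from the question; the _documentDBRepository field name is an assumption):
// Read the last known token (may be null on the user's first request).
string sessionTokenIn = HttpContext.Session.GetString("StorageSessionToken");
var sessionTokenOut = new Ref<string>();

Document created = await _documentDBRepository.CreateItemAsync(item, sessionTokenOut, sessionTokenIn);

// Always persist the token returned by the latest call;
// it supersedes the one that was passed in.
HttpContext.Session.SetString("StorageSessionToken", sessionTokenOut.Value);
Because the session token is stored in the ASP.NET session, the user's reads stay consistent with their writes regardless of which app server or Cosmos client handles the next request.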

RavenDB IsOperationAllowedOnDocument not supported in Embedded Mode

RavenDB throws an InvalidOperationException when IsOperationAllowedOnDocument is called in embedded mode.
I can see in the IsOperationAllowedOnDocument implementation a clause checking for calls in embedded mode.
namespace Raven.Client.Authorization
{
    public static class AuthorizationClientExtensions
    {
        public static OperationAllowedResult[] IsOperationAllowedOnDocument(this ISyncAdvancedSessionOperation session, string userId, string operation, params string[] documentIds)
        {
            var serverClient = session.DatabaseCommands as ServerClient;
            if (serverClient == null)
                throw new InvalidOperationException("Cannot get whatever operation is allowed on document in embedded mode.");
            // ...
Is there a workaround for this other than not using embedded mode?
Thanks for your time.
I encountered the same situation while writing some unit tests. The solution James provided worked; however, it resulted in having one code path for the unit test and another for the production code, which defeated the purpose of the unit test. We were able to create a second document store and connect it to the first, which then let us access the authorization extension methods successfully. While this solution would probably not be good for production code (because creating document stores is expensive), it works nicely for unit tests. Here is a code sample:
using (var documentStore = new EmbeddableDocumentStore
{
    RunInMemory = true,
    UseEmbeddedHttpServer = true,
    Configuration = { Port = EmbeddedModePort }
})
{
    documentStore.Initialize();
    var url = documentStore.Configuration.ServerUrl;

    using (var docStoreHttp = new DocumentStore { Url = url })
    {
        docStoreHttp.Initialize();
        using (var session = docStoreHttp.OpenSession())
        {
            // now you can run code like:
            // session.GetAuthorizationFor(),
            // session.SetAuthorizationFor(),
            // session.Advanced.IsOperationAllowedOnDocument(),
            // etc...
        }
    }
}
There are a couple of other items worth mentioning:
The first document store needs to run with UseEmbeddedHttpServer set to true so that the second one can access it.
I created a constant for the port so it would be used consistently and to ensure a non-reserved port is used.
I encountered this as well. Looking at the source, there's no way to do that operation as written. I'm not sure if there's some intrinsic reason why, since I could easily replicate the functionality in my app by making an HTTP request directly for the same information:
HttpClient http = new HttpClient();
http.BaseAddress = new Uri("http://localhost:8080");

// Build the same query the extension method would issue against the server.
var url = new StringBuilder("/authorization/IsAllowed/")
    .Append(Uri.EscapeUriString(userid))
    .Append("?operation=")
    .Append(Uri.EscapeUriString(operation))
    .Append("&id=").Append(Uri.EscapeUriString(entityid));

http.GetStringAsync(url.ToString()).ContinueWith((response) =>
{
    var results = _session.Advanced.DocumentStore.Conventions.CreateSerializer()
        .Deserialize<OperationAllowedResult[]>(
            new RavenJTokenReader(RavenJToken.Parse(response.Result)));
}).Wait();

AzMan API returns invalid data with high load

I have a WCF service that calls the Authorization Manager (AzMan) API, which is a COM interface. I use the following code to get a list of roles for a given user account:
public string[] GetRoleNamesForUser(string appName, SecurityIdentifier userSID)
{
    m_azManStore.UpdateCache(null);
    IAzApplication app = GetApplication(appName);
    List<string> userRoles = new List<string>();

    if (userSID != null)
    {
        IAzClientContext context = app.InitializeClientContextFromStringSid(userSID.ToString(), 1, null);
        object[] roles = (object[])context.GetRoles("");
        foreach (string uRole in roles)
        {
            userRoles.Add(uRole);
        }
        Marshal.FinalReleaseComObject(context);
    }
    return userRoles.ToArray();
}
This code works fine most of the time. However, while load testing (always using the same userSID), it will sometimes return an empty array for the list of roles. Does AzMan have a problem with heavy load, or is there something I am not doing right with regard to the AzMan COM object?
When using the AzMan COM object you must call Marshal.FinalReleaseComObject(object) to release resources; a memory leak is possible if this is not done. I had to wrap the AzMan store in a disposable class so that each call would open AzMan, use it, and then close it. The result is a slower, but more stable, application.
Take a look at this SO question for more details
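A minimal sketch of that disposable-wrapper idea, assuming the AzRoles COM interop assembly (the class name, store URL format, and initialization flags are assumptions, not the answerer's original code):
using System;
using System.Runtime.InteropServices;
using Microsoft.Interop.Security.AzRoles;

// Sketch: open the AzMan store per call and release the COM object deterministically.
public sealed class AzManStoreScope : IDisposable
{
    public AzAuthorizationStoreClass Store { get; }

    public AzManStoreScope(string storeUrl)
    {
        Store = new AzAuthorizationStoreClass();
        Store.Initialize(0, storeUrl, null); // 0 = open an existing store
    }

    public void Dispose()
    {
        // Release the underlying COM object so it never outlives the call.
        Marshal.FinalReleaseComObject(Store);
    }
}
Each call site would then create the wrapper in a using block (e.g. with an XML store URL such as msxml://c:\azman.xml), so the store is opened, used, and closed within a single operation.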