Saga error in NServiceBus using RavenDB persistence

I have two messages: ClientChangeMessage (responsible for creating the client) and ClientContractChangeMessage (responsible for the booking details of the client). In my database a client cannot be created until it has the client contract, and vice versa. On my local system everything works fine: if I get a client change message first I store it in the saga and wait for the client contract message, and when that arrives the saga processes both messages. But on my tester's machine, when the client change message comes it gets stored in the saga, yet when a client contract change arrives the saga does not find the existing client change saga and therefore creates another saga. I have tried it with the exact same messages my tester used and it works on my machine, so I am unable to figure out what might be going wrong. I am using RavenDB persistence.
ClientSagaState
public class ClientSagaState : IContainSagaData
{
    #region NServiceBus
    public Guid Id { get; set; }
    public string Originator { get; set; }
    public string OriginalMessageId { get; set; }
    #endregion

    public Guid ClientRef { get; set; }
    public ClientMessage ClientChangeMessage { get; set; }
    public ClientContractChangeMessage ClientContractChange { get; set; }
}
public class ClientSaga : Saga<ClientSagaState>,
    IAmStartedByMessages<ClientChangeMessage>,
    IAmStartedByMessages<ClientContractChangeMessage>
{
    public override void ConfigureHowToFindSaga()
    {
        ConfigureMapping<ClientChangeMessage>(s => s.ClientRef, m => m.EntityRef);
        ConfigureMapping<ClientContractChangeMessage>(s => s.ClientRef, m => m.PrimaryEntityRef);
    }

    public void Handle(ClientChangeMessage message)
    {
        if (BusRefTranslator.GetLocalRef(EntityTranslationNames.ClientChange, message.EntityRef.Value) != null)
        {
            GetHandler<ClientChangeMessage>().Handle(message);
            CompleteTheSaga();
            return;
        }

        HandleServiceUserChangeAndDependencies(message);
        //MarkAsComplete();
        CompleteTheSaga();
    }

    public void Handle(ClientContractChangeMessage message)
    {
        var state = this.Data;
        //Some handling logic
        //Check if client is not in database then store the state
        state.ClientContractChange = message;
        state.ClientRef = message.PrimaryEntityRef;
        //if client is in the database then
        MarkAsComplete();
    }
}
Thanks,

Because you are mapping to the saga data via the ClientRef property, you need to tell the persistence (Raven in this case) that this property is unique. What is probably happening is that, in some cases (it comes down to a race condition), the query done on the Raven index by the second message retrieves stale data, assumes there is no saga data, and creates a new instance.
This should fix your issue:
[Unique]
public Guid ClientRef { get; set; }
With this information, the Raven saga persister will create an additional document based on this property (because loading by Id in Raven is fully atomic) so that the second message will be sure to find it.
If you were using another persistence medium like NHibernate, the same attribute would be used to construct a unique index on that column.
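Applied to the saga data from the question, the state class would then look like this (assuming the [Unique] attribute from the NServiceBus.Saga namespace of the NServiceBus version in use):

public class ClientSagaState : IContainSagaData
{
    public Guid Id { get; set; }
    public string Originator { get; set; }
    public string OriginalMessageId { get; set; }

    [Unique]   // tells the saga persister this property uniquely identifies the saga
    public Guid ClientRef { get; set; }

    public ClientMessage ClientChangeMessage { get; set; }
    public ClientContractChangeMessage ClientContractChange { get; set; }
}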
Edit based on comment
The unique constraint document and your saga data will be fully consistent, so depending on the timing of the incoming messages, one of three things will happen:
1. The message is truly the first message to arrive and be processed, so no saga data is found and it is created.
2. The message is the second to arrive, so it looks for the saga data, finds it, and processes successfully.
3. The second message arrives very close to the first, so both are processed on separate threads at the same time. Both threads look for the saga data and find nothing, so they both begin to process. The one that finishes first commits successfully and saves its saga data. The one that finishes second attempts to save the saga data, but finds that while it's been working the other thread has moved its cheese, so Raven throws a concurrency exception. Your message goes back on the queue and is retried, and now that the saga data exists, the retry acts like scenario #2.

Related

How to implement a Saga on Topos?

Topos is a .NET event processing library, similar to Rebus. Unlike Rebus, it is not so much for messaging as for event processing.
Rebus supports sagas out of the box, including persistence, correlation and concurrency. How would one implement a saga on Topos?
If Topos supports Sagas, is there an example of a Saga implementation somewhere?
Topos does not have any kind of built-in sagas, unfortunately.
In Fleet Manager (the Rebus management app that comes with Rebus Pro, and the reason I made Topos) I made a saga-like event processor that uses MongoDB or LiteDB for persistence.
This implementation is completely proprietary though, as it's part of a commercial software product, and it's not quite generic enough to be suited for reuse. I can tell you a little bit about it here anyway, hopefully to give you some inspiration on how you could go about building something like it yourself. 🙂
The event processor is hosted in a Topos consumer, which dispatches all received events to a bunch of "projections", thus implementing the classic event sourced "left-fold" (current_state + event => new_state).
Fleet Manager has projections in two flavors: process managers (i.e. projections that cause other events to be emitted by issuing commands) and views. The two types combined would be what you call a "saga" 🙂
One possible view could be implemented like this (with lots of stuff removed for brevity):
public class QueueInstanceView : ViewInstance<InstancePerQueue>, IExpire, IHaveAccountId, IHaveQueueName, ICanBeHidden
{
    public string AccountId { get; set; }
    public string QueueName { get; set; }
    public DateTime LastActivity { get; set; }
    public bool Hidden { get; set; }

    protected override void DispatchEvent(AuditEvent auditEvent)
    {
        if (auditEvent.Body is EntityHidden entityHidden)
        {
            QueueName ??= entityHidden.Id;
            Hidden = !entityHidden.Reverse;
        }
        else
        {
            QueueName ??= auditEvent.GetQueueName();
        }

        LastActivity = auditEvent.GetTime();
    }
}
Note how the view class inherits from the generic ViewInstance<> class, closing it with the InstancePerQueue type. The base class keeps track of the ID of the view instance and some other stuff used to implement idempotency, and then InstancePerQueue defines how events are mapped to view instances.
It looks like this:
public class InstancePerQueue : ViewLocator
{
    public override string[] GetViewIds(AuditEvent auditEvent)
    {
        if (auditEvent.Body is EntityHidden entityHidden)
        {
            if (entityHidden.HasType(EntityTypeNames.Queue))
            {
                var accountId = auditEvent.GetAccountId();
                return new[] { $"{accountId}/{entityHidden.Id}" };
            }
            return Array.Empty<string>();
        }

        var queueName = auditEvent.GetQueueNameOrNull();
        if (queueName == null) return Array.Empty<string>();

        var accountId = auditEvent.GetAccountId();
        return new[] { $"{accountId}/{queueName}" };
    }
}
thus correlating events with IDs of the form "{accountId}/{queueName}" (where an "account" in Fleet Manager terminology is basically just an environment, i.e. the queue names get to identify the queues within an account).
Of course lots of logic is then implemented in the projection implementations, but while it's lengthy, it's also fairly straightforward.
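To sketch the plumbing around these projections: the consumer ends up doing a load-dispatch-save cycle per event, something along these lines (IViewStore, the non-generic ViewInstance and the dispatcher itself are hypothetical placeholders, not actual Topos or Fleet Manager APIs):

using System.Threading.Tasks;

// Hypothetical left-fold dispatcher - IViewStore is an assumed abstraction over
// MongoDB/LiteDB; ViewLocator, ViewInstance and AuditEvent mirror the snippets above.
public interface IViewStore<TView>
{
    Task<TView> Load(string viewId);
    Task Save(TView view);
}

public class ProjectionDispatcher<TView> where TView : ViewInstance, new()
{
    readonly ViewLocator _locator;     // e.g. an InstancePerQueue instance
    readonly IViewStore<TView> _store;

    public ProjectionDispatcher(ViewLocator locator, IViewStore<TView> store)
    {
        _locator = locator;
        _store = store;
    }

    public async Task Handle(AuditEvent auditEvent)
    {
        foreach (var viewId in _locator.GetViewIds(auditEvent))
        {
            // classic left-fold: current_state + event => new_state
            var view = await _store.Load(viewId) ?? new TView { Id = viewId };
            view.DispatchEvent(auditEvent);
            await _store.Save(view);
        }
    }
}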
I hope that this could give you some inspiration on how you might want to approach building "sagas" for Topos. 🙂
Btw. I cannot take credit for this particular design. I was originally exposed to a design very similar to this back in 2013-2014 by Emil Krog Ingerslev, who came up with it for an event-sourced application we were building at d60.
I later imitated all of the moving parts to implement persistent projections for Cirqus, which we used for a couple of event-sourced apps.
And finally I made my current implementation for Fleet Manager, back in 2016 when I needed something similar, only without the aggregate root stuff present in Cirqus, and working on Kafka instead of normal databases.

How best to handle data fetching needed for FluentValidation

In the app I'm working on, I'm using Mediatr and its pipelines to handle database interaction, some minor business logic, validation, etc.
There are a few checks for things like access control that I can handle in the pipeline, since I'm using a context object as described here https://jimmybogard.com/sharing-context-in-mediatr-pipelines/ to go from ASP.NET identity to a custom context object with user information and claims.
One problem I'm having is that since this application is multi-tenant, I need to ensure that even if an object exists, it belongs to that tenant, and the only way to be sure of that is to grab the object from the database and check it. It seems to me the validation shouldn't have side effects, so I don't want to rely on that to populate the context object. But then that pushes a bunch of validation down into the Mediatr handlers as they check for object existence, and so on, leading to a lot of repeated code. I don't really want to query the database multiple times since some queries can be expensive.
Another issue with doing the more complicated validation in the actual request handlers is getting what are essentially validation errors back out. Currently, if one of these checks fail I throw a ValidationException, which is then caught by middleware and turned into a ProblemDetails that's returned to the API caller. This is basically exceptions as flow control, and a validation failure really isn't "exceptional" anyhow.
The thoughts I'm having on how to solve this are:
1. Somewhere in the pipeline, when I'm building the context, include attempting to fetch the objects needed from the database. Validation then fails if any of these are null. This seems like it would make testing harder, as well as needing to decorate the requests somehow (or use reflection) so the pipeline can know to attempt to load these objects.
2. Have the queries in the validator, but use some sort of cache-aware repository so that when the same object is queried later, it's served from the cache and not the database. The handlers would also use this cache-aware repository (currently the handlers interact directly with the EF Core DbContext to query). This adds the issue of cache invalidation, which I'm going to have to handle at some point anyhow (quite a few items are seldom modified). For testing, a dummy cache object can be injected that doesn't actually cache anything.
3. Make all the responses from requests implement an interface (or extend an abstract class) that has validation info, general success flags, etc. This can either be returned through the API directly, or have some pipeline that transforms failures into ProblemDetails. This would add some boilerplate to every response and handler, but avoids exceptions as flow control, and the caching/reflection issues in the other options.
Assume for 1 and 2 that any sort of race conditions are not an issue. Objects don't change owners, and things are seldom actually deleted from the database for auditing/accounting purposes.
I know there's no true one size fits all for problems like this, but I would like to know if there's additional options I'm missing, or any long term maintainability issues anyone with a similar pipeline has encountered if they went with one of these listed options.
We use MediatR IRequestPreProcessor for fetching data that we need both in RequestHandler and in FluentValidation validators.
RequestPreProcessor:
public interface IProductByIdBinder
{
    int ProductId { get; }
    ProductEntity Product { set; }
}

public class ProductByIdBinder<T> : IRequestPreProcessor<T> where T : IProductByIdBinder
{
    private readonly IRepositoryReadAsync<ProductEntity> productRepository;

    public ProductByIdBinder(IRepositoryReadAsync<ProductEntity> productRepository)
    {
        this.productRepository = productRepository;
    }

    public async Task Process(T request, CancellationToken cancellationToken)
    {
        request.Product = await productRepository.GetAsync(request.ProductId);
    }
}
RequestHandler:
public class ProductDeleteCommand : IRequest, IProductByIdBinder
{
    public ProductDeleteCommand(int id)
    {
        ProductId = id;
    }

    public int ProductId { get; }
    public ProductEntity Product { get; set; }

    private class ProductDeleteCommandHandler : IRequestHandler<ProductDeleteCommand>
    {
        private readonly IRepositoryAsync<ProductEntity> productRepository;

        public ProductDeleteCommandHandler(IRepositoryAsync<ProductEntity> productRepository)
        {
            this.productRepository = productRepository;
        }

        public Task<Unit> Handle(ProductDeleteCommand request, CancellationToken cancellationToken)
        {
            productRepository.Delete(request.Product);
            return Unit.Task;
        }
    }
}
FluentValidation validator:
public class ProductDeleteCommandValidator : AbstractValidator<ProductDeleteCommand>
{
    public ProductDeleteCommandValidator()
    {
        RuleFor(cmd => cmd)
            .Must(cmd => cmd.Product != null)
            .WithMessage(cmd => $"The product with id {cmd.ProductId} doesn't exist.");
    }
}
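One detail left out above is the wiring: the pre-processor runs inside MediatR's RequestPreProcessorBehavior, so it has to be registered with the container. With Microsoft.Extensions.DependencyInjection a closed registration like the following is the safest bet (this is an assumption about the setup, not part of the answer above):

// Registers the binder for this specific command. MediatR's DI extensions register
// RequestPreProcessorBehavior; make sure it runs before your validation behavior so
// that Product is populated by the time the validator executes.
services.AddTransient<IRequestPreProcessor<ProductDeleteCommand>, ProductByIdBinder<ProductDeleteCommand>>();

// Some containers also accept the open generic and skip requests that don't satisfy
// the IProductByIdBinder constraint:
// services.AddTransient(typeof(IRequestPreProcessor<>), typeof(ProductByIdBinder<>));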
I see nothing wrong with handling business logic validation in the handler layer.
Moreover, I do not think it is right to throw exceptions for them, as you said it is exceptions as flow control.
Introducing a cache seems like overkill for the use case too. The most reasonable option is the third IMHO.
Instead of implementing an interface you can use the nifty OneOf library and have something like
using HandlerResponse = OneOf<Success, NotFound, ValidationResponse>;

public class MediatorHandler : IRequestHandler<Command, HandlerResponse>
{
    public async Task<HandlerResponse> Handle(
        Command command,
        CancellationToken cancellationToken)
    {
        Resource resource = await _userRepository.GetResource(command.Id);

        if (resource is null)
            return new NotFound();

        if (!resource.IsValid)
            return new ValidationResponse(new ProblemDetails());

        return new Success();
    }
}
And then map it in your API Layer like
public async Task<IActionResult> PostAsync([FromBody] DummyRequest request)
{
    HandlerResponse response = await _mediator.Send(new Command(request.Id));

    return response.Match<IActionResult>(
        success => Created(),
        notFound => NotFound(),
        failed => new UnprocessableEntityObjectResult(failed.ProblemDetails));
}
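For completeness: Success and NotFound are the ready-made types shipped in OneOf.Types, while ValidationResponse would be your own type; a minimal assumed version might look like this:

using Microsoft.AspNetCore.Mvc;

// Hypothetical carrier for validation failures; not part of the OneOf library.
public class ValidationResponse
{
    public ValidationResponse(ProblemDetails problemDetails) => ProblemDetails = problemDetails;

    public ProblemDetails ProblemDetails { get; }
}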

Save complex object to session ASP .NET CORE 2.0

I am quite new to ASP.NET Core, so please help. I would like to avoid a database round trip for an ASP.NET Core application. I have functionality to dynamically add columns in a datagrid. Column settings (visibility, enabled, width, caption) are stored in the DB.
So I would like to store a List<PersonColumns> on the server only for the current session, but I am not able to do this. I already use JsonConvert methods to serialize and deserialize objects to/from session. This works for List<Int32> or objects with simple properties, but not for a complex object with nested properties.
My object I want to store to session looks like this:
[Serializable]
public class PersonColumns
{
    public Int64 PersonId { get; set; }

    List<ViewPersonColumns> PersonCols { get; set; }

    public PersonColumns(Int64 personId)
    {
        this.PersonId = personId;
    }

    public void LoadPersonColumns(dbContext dbContext)
    {
        LoadPersonColumns(dbContext, null);
    }

    public void LoadPersonColumns(dbContext dbContext, string code)
    {
        PersonCols = ViewPersonColumns.GetPersonColumns(dbContext, code, PersonId);
    }

    public static List<ViewPersonColumns> GetFormViewColumns(SatisDbContext dbContext, string code, Int64 formId, string viewName, Int64 personId)
    {
        var columns = ViewPersonColumns.GetPersonColumns(dbContext, code, personId);
        return columns.Where(p => p.FormId == formId && p.ObjectName == viewName).ToList();
    }
}
I would also like to ask whether my approach of saving a list of 600 records to the session is bad. Is it better to hit the database and load the columns each time the user wants to display the grid?
Any advice appreciated
Thanks
EDIT: I have tested storing a List<ViewPersonColumns> in the session directly and it is saved correctly. When I save an object where the List<ViewPersonColumns> is a property, only the built-in types are saved and the List property is null.
The object I want to save in session
[Serializable]
public class UserManagement
{
    public String PersonUserName { get; set; }
    public Int64 PersonId { get; set; }
    public List<ViewPersonColumns> PersonColumns { get; set; } //not saved to session??

    public UserManagement() { }

    public UserManagement(DbContext dbContext, string userName)
    {
        var person = dbContext.Person.Single(p => p.UserName == userName);
        PersonUserName = person.UserName;
        PersonId = person.Id;
    }

    /*public void PrepareUserData(DbContext dbContext)
    {
        LoadPersonColumns(dbContext);
    }*/

    public void LoadPersonColumns(DbContext dbContext)
    {
        LoadPersonColumns(dbContext, null);
    }

    public void LoadPersonColumns(DbContext dbContext, string code)
    {
        PersonColumns = ViewPersonColumns.GetPersonColumns(dbContext, code, PersonId);
    }

    public List<ViewPersonColumns> GetFormViewColumns(Int64 formId, string viewName)
    {
        if (PersonColumns == null)
            return null;

        return PersonColumns.Where(p => p.FormId == formId && p.ObjectName == viewName).ToList();
    }
}
Save columns to the session
UserManagement userManagement = new UserManagement(_context, user.UserName);
userManagement.LoadPersonColumns(_context);
HttpContext.Session.SetObject("ActualPersonContext", userManagement);
HttpContext.Session.SetObject("ActualPersonColumns", userManagement.PersonColumns);
Load columns from the session
//userManagement built-in types are set. The PersonColumns is null - not correct
UserManagement userManagement = session.GetObject<UserManagement>("ActualPersonContext");
//The cols is filled from session with 600 records - correct
List<ViewPersonColumns> cols = session.GetObject<List<ViewPersonColumns>>("ActualPersonColumns");
Using a list for each column is better than hitting the database each time.
You can't create and store sessions in .NET Core the way you did in .NET Framework 4.0.
Try it like this:
Startup.cs
public void ConfigureServices(IServiceCollection services)
{
    //services.AddDbContext<GeneralDBContext>(options => options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));
    services.AddMvc().AddSessionStateTempDataProvider();
    services.AddSession();
}
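Note that AddSession alone is not enough; the session middleware also has to be added to the request pipeline in Configure, before MVC, or the extension methods below will throw at runtime:

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    app.UseSession();   // must come before UseMvc so session is available to controllers
    app.UseMvc();
}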
Common/SessionExtensions.cs
using Microsoft.AspNetCore.Http;
using Newtonsoft.Json;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

namespace IMAPApplication.Common
{
    public static class SessionExtensions
    {
        public static T GetComplexData<T>(this ISession session, string key)
        {
            var data = session.GetString(key);
            if (data == null)
            {
                return default(T);
            }
            return JsonConvert.DeserializeObject<T>(data);
        }

        public static void SetComplexData(this ISession session, string key, object value)
        {
            session.SetString(key, JsonConvert.SerializeObject(value));
        }
    }
}
Usage
Create session:
public IActionResult Login([FromBody]LoginViewModel model)
{
    LoggedUserVM user = GetUserDataById(model.userId);

    //Create session with complex object
    HttpContext.Session.SetComplexData("loggerUser", user);

    return Json(new { status = result.Status, message = result.Message });
}
Get session data:
public IActionResult Index()
{
    //Get session data
    LoggedUserVM loggedUser = HttpContext.Session.GetComplexData<LoggedUserVM>("loggerUser");
}
Hope this is helpful. Good luck.
This is an evergreen post. Even though Microsoft has recommended serialisation to store the object in session, it is not a correct solution unless your object is read-only. I have a blog post explaining all the scenarios, and I have also pointed out the issues in the ASP.NET Core GitHub repository in issue id 18159.
A synopsis of the problems:
A. Serialisation isn't the same as the object. True, it will help in a distributed-server scenario, but it comes with a caveat that Microsoft has failed to highlight: it will work without unpredictable failures only when the object is meant to be read, not written back.
B. If you are looking for a read-write object in the session, every time you change the object that was read from the session after deserialisation, it needs to be written back by serialising it again. This alone can lead to multiple complexities, as you will need to either keep track of the changes or keep writing back to the session after each change to any property. In one request to the server you will have scenarios where the object is written back multiple times before the response is sent.
C. For a read-write object in the session, this will fail even on a single server: the user's actions can trigger multiple rapid requests, and more often than not the system will find itself in a situation where the object is being serialised or deserialised by one thread while being edited and written back by another. The result is that threads overwrite each other's object state, and even locks won't help you much, since the object is not one real object but a temporary object created by deserialisation.
D. There are issues with serialising complex objects: it is not just a performance hit, it may even fail in certain scenarios, especially if you have deeply nested objects that sometimes refer back to themselves.
The synopsis of the solution is here; the full implementation along with code is in the blog post:
1. Implement this as a cache object: create one item in IMemoryCache for each unique session.
2. Keep the cache entry in sliding-expiration mode, so that each read revives the expiry time, thereby keeping the object cached for as long as the session is active.
3. The second point alone is not enough; you will also need to implement a heartbeat technique, triggering a call to the session every T-minus-1 minutes or so from JavaScript. (We used to do this anyway to keep the session alive while the user is working in the browser, so it won't be any different.)
Additional Recommendations
A. Make an object called SessionManager, so that all your code related to session read/write sits in one place (a minimal sketch follows after these recommendations).
B. Do not keep a very high value for the session timeout. If you are implementing the heartbeat technique, even a 3-minute session timeout will be enough.
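A minimal sketch of recommendation A, assuming IMemoryCache and keying entries on the session id (illustrative only; the full version is in the blog post referenced above):

using System;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Caching.Memory;

// Sketch of a per-session cache wrapper: one cache entry per session id,
// with sliding expiration so user activity keeps the object alive.
public class SessionManager
{
    private readonly IMemoryCache _cache;
    private readonly IHttpContextAccessor _httpContextAccessor;

    public SessionManager(IMemoryCache cache, IHttpContextAccessor httpContextAccessor)
    {
        _cache = cache;
        _httpContextAccessor = httpContextAccessor;
    }

    public T GetOrCreate<T>(string key, Func<T> factory)
    {
        // Note: Session.Id only becomes stable once something has been written to the session.
        var sessionId = _httpContextAccessor.HttpContext.Session.Id;

        return _cache.GetOrCreate($"{sessionId}:{key}", entry =>
        {
            entry.SlidingExpiration = TimeSpan.FromMinutes(3); // matches a short session timeout
            return factory();
        });
    }
}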

MassTransit saga with Redis persistence gives "Method Accept does not have an implementation" exception

I'm trying to add Redis persistence to my saga which is managing calls to a routing slip (as well as additional messages to other consumers depending on the result of the routing slip) in the hopes that it will solve another timeout issue I keep getting.
However, I get an error message which goes in to my saga_error queue in RabbitMQ.
The error shown in the message is:
Method 'Accept' in type 'GreenPipes.DynamicInternal.Automatonymous.State' from assembly 'AutomatonymousGreenPipes.DynamicInternalc83411641fad46798326d78fe60522c9, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null' does not have an implementation
My correlation configuration code is:
InstanceState(s => s.CurrentState);
Event(() => RequestLinkEvent, x => x.CorrelateById(context => context.Message.LinkId).SelectId(y => y.Message.LinkId));
Event(() => LinkCreatedEvent, x => x.CorrelateById(context => context.Message.LinkId));
Event(() => CreateLinkGroupFailedEvent, x => x.CorrelateById(context => context.Message.LinkId));
Event(() => CreateLinkFailedEvent, x => x.CorrelateById(context => context.Message.LinkId));
Event(() => RequestLinkFailedEvent, x => x.CorrelateById(context => context.Message.LinkId));
Request(() => LinkRequest, x => x.UrlRequestId, cfg =>
{
    cfg.ServiceAddress = new Uri($"{hostAddress}/{nameof(SelectUrlByPublicId)}");
    cfg.SchedulingServiceAddress = new Uri($"{hostAddress}/{nameof(SelectUrlByPublicId)}");
    cfg.Timeout = TimeSpan.FromSeconds(30);
});
The LinkId in the above code is always a unique Guid.
The issue seems to happen when the saga is reading back an event that has been sent from my routing slip (be it a success or failure event).
An example event interface that is not working is:
public interface ILinkCreated
{
    Guid? CorrelationId { get; set; }
    int DatabaseId { get; set; }
    Guid LinkId { get; set; }
    string LinkName { get; set; }
}
If I switch back to an InMemorySagaRepository everything works (locally). I've tried so many different combinations of things and have now hit a brick wall.
I've updated all packages to the latest version. I've also been checking my redis database and can see that the state machine instance goes in each time correctly.
I also saw that someone on Google groups had the same issue but there's no response to their post.
The problem here is request-response.
It works like this:
1. MT puts the request id into the saga state property UrlRequestId.
2. The request is sent.
3. You get a response back; the response contains the requestor address and the request id in its headers.
4. MT uses the saga repository to find your instance using repo.Find(x => x.UrlRequestId == message.Headers.RequestId) (this is not the real code, but it is what happens).
5. Redis (or any other key-value store) doesn't support queries, so we don't support queries in those saga repositories either, and you get the "not implemented" exception.
Your correlation specification for responses has no effect, since Request always uses headers to find the saga instance to which the response belongs.
You can work around this by not using request-response and instead emitting an event using context.Publish(new LinkCreatedEvent { ... , CorrelationId = context.Message.CorrelationId }) and correlating it in the usual way, as sketched below.
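In the state machine the published event is then handled like any other correlated event. A rough sketch (event names are taken from the question; the state names and the DatabaseId instance property are assumptions):

// Correlate the published event by the saga's own identifier instead of relying
// on request-response headers, then handle it in the relevant state.
Event(() => LinkCreatedEvent, x => x.CorrelateById(context => context.Message.LinkId));

During(WaitingForLink,
    When(LinkCreatedEvent)
        .Then(context => context.Instance.DatabaseId = context.Data.DatabaseId)
        .TransitionTo(LinkCreated));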
So to answer my own question, and perhaps shine a light on my own stupidity: the issue was in fact caused by how I had set up my state machine instance.
Instead of having CurrentState of type State as below:
public State CurrentState { get; set; }
I should have specified it as a string as such:
public string CurrentState { get; set; }
Now it can be deserialized into the object correctly. I suspect this may have been causing my timeout issues with the InMemorySagaRepository on my staging server too.
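For reference, the corrected instance then looks roughly like this (the class name and the UrlRequestId property are inferred from the Request configuration shown in the question):

public class LinkStateMachineInstance : SagaStateMachineInstance
{
    public Guid CorrelationId { get; set; }

    // Stored as a string so the Redis/JSON persistence can round-trip the state
    public string CurrentState { get; set; }

    // Used by the Request(...) configuration in the question
    public Guid? UrlRequestId { get; set; }
}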

LightSwitch - bulk-loading all requests into one using a domain service

I need to group some data from a SQL Server database and since LightSwitch doesn't support that out-of-the-box I use a Domain Service according to Eric Erhardt's guide.
However my table contains several foreign keys, and of course I want the correct related data to be shown in the table (doing it just as in the guide will only make the key values show). I solved this by adding a relationship to my newly created entity like this:
And my Domain Service class looks like this:
public class AzureDbTestReportData : DomainService
{
    private CountryLawDataDataObjectContext context;

    public CountryLawDataDataObjectContext Context
    {
        get
        {
            if (this.context == null)
            {
                EntityConnectionStringBuilder builder = new EntityConnectionStringBuilder();
                builder.Metadata =
                    "res://*/CountryLawDataData.csdl|res://*/CountryLawDataData.ssdl|res://*/CountryLawDataData.msl";
                builder.Provider = "System.Data.SqlClient";
                builder.ProviderConnectionString =
                    WebConfigurationManager.ConnectionStrings["CountryLawDataData"].ConnectionString;
                this.context = new CountryLawDataDataObjectContext(builder.ConnectionString);
            }
            return this.context;
        }
    }

    /// <summary>
    /// Override the Count method in order for paging to work correctly
    /// </summary>
    protected override int Count<T>(IQueryable<T> query)
    {
        return query.Count();
    }

    [Query(IsDefault = true)]
    public IQueryable<RuleEntryTest> GetRuleEntryTest()
    {
        return this.Context.RuleEntries
            .Select(g =>
                new RuleEntryTest()
                {
                    Id = g.Id,
                    Country = g.Country,
                    BaseField = g.BaseField
                });
    }
}

public class RuleEntryTest
{
    [Key]
    public int Id { get; set; }
    public string Country { get; set; }
    public int BaseField { get; set; }
}
It works and all that: both the Country name and the BaseField load with autocomplete boxes as they should, but it takes a VERY long time. With two columns it takes 5-10 seconds to load one page, and I have 10 more columns I haven't implemented yet.
The reason it takes so long is that each piece of related data (each Country and BaseField) requires its own request. Loading a page looks like this in Fiddler:
This isn't acceptable at all; there should be a way of combining all those calls into one, just as happens when loading the same table without going through the Domain Service.
So.. that was a lot explaining, my question is: Is there any way I can make all related data load at once or improve the performance by any other way? It should not take 10+ seconds to load a screen.
Thanks for any help or input!
My RIA Service queries are extremely fast compared to not using them, even when I'm doing aggregation. It might be the fact that you're using "virtual relationships" (which you can tell by the dotted lines between the tables) that you've created using your RuleEntryTest entity.
Why is your original RuleEntry entity not related to both Country & BaseUnit in LightSwitch BEFORE you start creating your RIA entity?
I haven't used Fiddler to see what's happening, but I'd try creating "real" relationships instead of "virtual" ones, and see if that helps your RIA entity's performance.