I am new to Apache Camel. I have a very common use case that I am struggling to configure a Camel route for. The use case is to take an execution context and then:
Update the database using the execution context.
Then, using an event on the execution context, create a byte message and send it over MQ.
Then, in the next step, again use the execution context and perform event processing.
Update the database using the execution context.
So basically it is a kind of nested route. In the configuration below I need access to the executionContext that executionController has created, inside updateSchedulerState, sendNotification, processEvent and updateSchedulerState, i.e. the steps annotated as (1), (2), (3) and (4) respectively.
from("direct:processMessage")
.routeId("MessageExecutionRoute")
.beanRef("executionController", "getEvent", true)
.beanRef("executionController", "updateSchedulerState", true) (1)
.beanRef("executionController", "sendNotification", true) (2)
.beanRef("messageTransformer", "transform", true)
.to("wmq:NOTIFICATION")
.beanRef("executionController", "processEvent", true) (3)
.beanRef("eventProcessor", "process", true)
.beanRef("messageTransformer", "transform", true)
.to("wmq:EVENT")
.beanRef("executionController", "updateSchedulerState", true); (4)
Kindly let me know how I should configure the route for the above use case.
Thanks,
Vaibhav
So you need to access this executionContext in your beans at various points in the route?
If I understand correctly, you can put this executionContext in an exchange property, and it will persist throughout the route.
Setting the exchange property can be done via the Exchange.setProperty() method or via the Camel DSL, like this:
from("direct:xyz)
.setProperty("awesome", constant("YES"))
//...
You can access exchange properties from a bean by adding a method argument of type Exchange, like this:
public class MyBean {
    public void foo(Something something, Exchange exchange) {
        if ("YES".equals(exchange.getProperty("awesome"))) {
            // ...
        }
    }
}
Or via the @Property annotation, like this:
public class MyBean {
    public void foo(Something something, @Property("awesome") String awesome) {
        if ("YES".equals(awesome)) {
            // ...
        }
    }
}
This presumes you are using a reasonably recent version of Camel.
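For example, applied to your route, a minimal sketch of the bean side might look like the following (the ExecutionContext and Event types and the createExecutionContext() helper are assumptions here; adjust them to your actual executionController API). The route itself can stay exactly as you posted it; only the bean method signatures change:
import org.apache.camel.Exchange;
import org.apache.camel.Property;

public class ExecutionController {

    // getEvent also receives the Exchange, so it can stash the context as an exchange property
    public Event getEvent(Object body, Exchange exchange) {
        ExecutionContext ctx = createExecutionContext(body); // hypothetical factory method
        exchange.setProperty("executionContext", ctx);
        return ctx.getEvent();
    }

    // later steps get the same context back via @Property, no matter how many
    // other beans or endpoints sit between them in the route
    public void updateSchedulerState(Object body, @Property("executionContext") ExecutionContext ctx) {
        // update the database using ctx ...
    }
}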
Does this help?
In the app I'm working on, I'm using MediatR and its pipelines to handle database interaction, some minor business logic, validation, etc.
There are a few checks, for things like access control, that I can handle in the pipeline, since I'm using a context object, as described here https://jimmybogard.com/sharing-context-in-mediatr-pipelines/, to go from ASP.NET Identity to a custom context object with user information and claims.
One problem I'm having is that since this application is multi-tenant, I need to ensure that even if an object exists, it belongs to that tenant, and the only way to be sure of that is to grab the object from the database and check it. It seems to me the validation shouldn't have side effects, so I don't want to rely on that to populate the context object. But then that pushes a bunch of validation down into the Mediatr handlers as they check for object existence, and so on, leading to a lot of repeated code. I don't really want to query the database multiple times since some queries can be expensive.
Another issue with doing the more complicated validation in the actual request handlers is getting what are essentially validation errors back out. Currently, if one of these checks fails I throw a ValidationException, which is then caught by middleware and turned into a ProblemDetails that's returned to the API caller. This is basically exceptions as flow control, and a validation failure really isn't "exceptional" anyhow.
The thoughts I'm having on how to solve this are:
Somewhere in the pipeline, when I'm building the context, include attempting to fetch the objects needed from the database. Validation then fails if any of these are null. This seems like it would make testing harder, and it would require the requests to be decorated somehow (or the use of reflection) so the pipeline knows which objects to attempt to load.
Have the queries in the validator, but use some sort of cache aware repository so when the same object is queried later, it's served from the cache, and not the database. The handlers would also use this cache aware repository (Currently the handlers interact directly with the EF Core DbContext to query). This then adds the issue of cache invalidation, which I'm going to have to handle at some point, anyhow (quite a few items are seldom modified). For testing, a dummy cache object can be injected that doesn't actually cache anything.
Make all the responses from requests implement an interface (or extend an abstract class) that has validation info, general success flags, etc. This can either be returned through the API directly, or have some pipeline that transforms failures into ProblemDetails. This would add some boilerplate to every response and handler, but avoids exceptions as flow control, and the caching/reflection issues in the other options.
Assume for 1 and 2 that any sort of race conditions are not an issue. Objects don't change owners, and things are seldom actually deleted from the database for auditing/accounting purposes.
I know there's no true one-size-fits-all solution for problems like this, but I would like to know if there are additional options I'm missing, or any long-term maintainability issues anyone with a similar pipeline has encountered if they went with one of the listed options.
We use MediatR's IRequestPreProcessor for fetching data that we need both in the RequestHandler and in the FluentValidation validators.
RequestPreProcessor:
public interface IProductByIdBinder
{
    int ProductId { get; }
    ProductEntity Product { set; }
}

public class ProductByIdBinder<T> : IRequestPreProcessor<T> where T : IProductByIdBinder
{
    private readonly IRepositoryReadAsync<ProductEntity> productRepository;

    public ProductByIdBinder(IRepositoryReadAsync<ProductEntity> productRepository)
    {
        this.productRepository = productRepository;
    }

    public async Task Process(T request, CancellationToken cancellationToken)
    {
        request.Product = await productRepository.GetAsync(request.ProductId);
    }
}
RequestHandler:
public class ProductDeleteCommand : IRequest, IProductByIdBinder
{
    public ProductDeleteCommand(int id)
    {
        ProductId = id;
    }

    public int ProductId { get; }
    public ProductEntity Product { get; set; }

    private class ProductDeleteCommandHandler : IRequestHandler<ProductDeleteCommand>
    {
        private readonly IRepositoryAsync<ProductEntity> productRepository;

        public ProductDeleteCommandHandler(
            IRepositoryAsync<ProductEntity> productRepository)
        {
            this.productRepository = productRepository;
        }

        public Task<Unit> Handle(ProductDeleteCommand request, CancellationToken cancellationToken)
        {
            productRepository.Delete(request.Product);
            return Unit.Task;
        }
    }
}
FluentValidation validator:
public class ProductDeleteCommandValidator : AbstractValidator<ProductDeleteCommand>
{
    public ProductDeleteCommandValidator()
    {
        RuleFor(cmd => cmd)
            .Must(cmd => cmd.Product != null)
            .WithMessage(cmd => $"The product with id {cmd.ProductId} doesn't exist.");
    }
}
I see nothing wrong with handling business logic validation in the handler layer.
Moreover, I do not think it is right to throw exceptions for them; as you said, that is exceptions as flow control.
Introducing a cache seems like overkill for the use case too. The most reasonable option is the third, IMHO.
Instead of implementing an interface you can use the nifty OneOf library and have something like:
using HandlerResponse = OneOf.OneOf<Success, NotFound, ValidationResponse>;
public class MediatorHandler : IRequestHandler<Command, HandlerResponse>
{
    // _userRepository is injected via the constructor (omitted here for brevity)

    public async Task<HandlerResponse> Handle(
        Command command,
        CancellationToken cancellationToken)
    {
        Resource resource = await _userRepository
            .GetResource(command.Id);

        if (resource is null)
            return new NotFound();

        if (!resource.IsValid)
            return new ValidationResponse(new ProblemDetails());

        return new Success();
    }
}
And then map it in your API layer, like:
public async Task<IActionResult> PostAsync([FromBody] DummyRequest request)
{
    HandlerResponse response = await _mediator.Send(
        new Command(request.Id));

    return response.Match<IActionResult>(
        success => Created(),
        notFound => NotFound(),
        failed => new UnprocessableEntityResult(failed.ProblemDetails));
}
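For reference, Success, NotFound and ValidationResponse are not part of the OneOf library; they are small result types you define yourself. A possible sketch (names and shapes assumed from the usage above):
public class Success { }

public class NotFound { }

public class ValidationResponse
{
    public ValidationResponse(ProblemDetails problemDetails) => ProblemDetails = problemDetails;

    public ProblemDetails ProblemDetails { get; }
}
Any plain class or struct will do; the only requirement is that the handler and the Match arms agree on the same three types.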
I have a very loosely coupled system that takes just about any JSON payload and saves it in a Mongo collection.
There are no entities to expose as resources, only controller endpoints,
e.g.:
@RequestMapping(method = RequestMethod.POST, consumes = MediaType.APPLICATION_JSON_VALUE, produces = MediaType.APPLICATION_JSON_VALUE)
public ResponseEntity<Map<String, Object>> publish(@RequestBody Map<String, Object> jsonBody) {
    // ... save the body in Mongo
}
I still want to build a hypermedia-driven app, with links for navigation and paging.
The controller therefore implements ResourceProcessor:
public class PublicationController implements ResourceProcessor<RepositoryLinksResource> {
    // ...
    @Override
    public RepositoryLinksResource process(RepositoryLinksResource resource) {
        resource.add(linkTo(methodOn(PublicationController.class).getPublications()).withRel("publications"));
        return resource;
    }
}
The problem is that the processor never gets called.
Putting @EnableWebMvc on a configuration class solves it (the processor gets called), but firstly that should not be necessary, and secondly the format of the HAL links seems broken,
e.g. they get formatted as a list:
links: [
    {
        "links": [
            {
                "rel": "self",
                "href": "http://localhost:8080/api/publications/121212"
            },
            {
                "rel": "findByStartTimeBetween",
                "href": "http://localhost:8080/api/publications/search/findStartTimeBetween?timeStart=2015-04-10T13:44:56.437&timeEnd=2015-04-10T13:44:56.439"
            }
        ]
    }
]
Are there alternatives to @EnableWebMvc so that the processor gets called?
Currently I'm running Spring Boot v1.2.3.
Well, it turns out that the answer was quite simple.
The problem was that I had static content (resources/static/index.html).
This suppresses the hypermedia links from the root.
Moving the static content made everything work great.
I have a Website that contains a number of webpages and some WCF services.
I have a logging IHttpModule which subscribes to PreRequestHandlerExecute and sets a number of log4net MDC variables such as:
MDC.Set("path", HttpContext.Current.Request.Path);
string ip = HttpContext.Current.Request.ServerVariables["HTTP_X_FORWARDED_FOR"];
if(string.IsNullOrWhiteSpace(ip))
ip = HttpContext.Current.Request.ServerVariables["REMOTE_ADDR"];
MDC.Set("ip", ip);
This module works well for my aspx pages.
To enable the module to work with WCF I have set aspNetCompatibilityEnabled="true" in the web.config and RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed on the service.
But when the service method is called the MDC no longer contains any of the set values. I have confirmed they are being set by putting a logging method in the PreRequestHandlerExecute.
I think the MDC is losing the values because in the log I can see that the PreRequestHandlerExecute handler method and the service method calls are on separate threads.
The post "log4net using ThreadContext.Properties in wcf PerSession service" suggests using log4net.GlobalContext, but I think that solution would run into issues if two users hit the application at the same time, as GlobalContext is shared by all threads.
Is there a way to make this work?
Rather than taking the values from the HttpContext and storing them in one of log4net's context objects, why not log the values directly from the HttpContext? See my answer to the linked question for some techniques that might work for you.
Capture username with log4net
If you go to the bottom of my answer, you will find what might be the best solution: write an HttpContext value provider object that you can put in log4net's GlobalDiagnosticContext.
For example, you might do something like this (untested):
public class HttpContextValueProvider
{
    private string name;

    public HttpContextValueProvider(string name)
    {
        this.name = name.ToLower();
    }

    public override string ToString()
    {
        if (HttpContext.Current == null) return "";

        var context = HttpContext.Current;
        switch (name)
        {
            case "path":
                return context.Request.Path;
            case "user":
                if (context.User != null && context.User.Identity.IsAuthenticated)
                    return context.User.Identity.Name;
                return "";
            case "ip":
                string ip = context.Request.ServerVariables["HTTP_X_FORWARDED_FOR"];
                if (string.IsNullOrWhiteSpace(ip))
                    ip = context.Request.ServerVariables["REMOTE_ADDR"];
                return ip;
            default:
                // fall back to the HttpContext.Current.Items dictionary
                object item = context.Items[name];
                return item != null ? item.ToString() : "";
        }
    }
}
In the default clause I assume the name, if it is not a specific case that we want to handle, represents a key in the HttpContext.Current.Items dictionary. You could make it more generic by also adding the ability to access Request.ServerVariables and/or other HttpContext information.
You would use this object like so:
Somewhere in your program/web site/service, add some instances of the object to log4net's global dictionary. When log4net resolves the value from the dictionary, it will call ToString before logging the value.
GDC.Set("path", new HttpContextValueProvider("path"));
GDC.Set("ip", new HttpContextValueProvider("ip"));
Note, you are using log4net's global dictionary, but the objects that you are putting in the dictionary are essentially wrappers around the HttpContext.Current object, so you will always be getting the information for the current request, even if you are handling simultaneous requests.
Good luck!
Here is a question on the caching proxy design pattern.
Is it possible, in PHP, to create a dynamic caching proxy implementation that automatically adds caching behaviour to any object?
Here is an example:
class User
{
    public function load($login)
    {
        // Load user from db
    }

    public function getBillingRecords()
    {
        // a very heavy request
    }

    public function computeStatistics()
    {
        // a very heavy computation
    }
}
class Report
{
    protected $_user = null;

    public function __construct(User $user)
    {
        $this->_user = $user;
    }

    public function generate()
    {
        $billing = $this->_user->getBillingRecords();
        $stats = $this->_user->computeStatistics();
        /*
            ...
            Some rendering and additional processing code
            ...
        */
    }
}
You will notice that Report uses some heavy methods from User.
Now I want to add a cache system.
Instead of designing a classic caching system, I wonder if it is possible to implement caching as a proxy design pattern, with this kind of usage:
<?php
$cache = new Cache(new Memcache(...));

// This line will create a User object (or an instance of a child class of User, e.g. UserProxy).
// Each call to a method listed in the 3rd argument will use the cache system configured above.
$user = ProxyCache::create("User", $cache, array('getBillingRecords', 'computeStatistics'));

$user->load('johndoe');

// $user is an instance of User (or a child class), so the contract is respected
$report = new Report($user);
$report->generate(); // long execution time
$report->generate(); // quick execution time (using cache)
$report->generate(); // quick execution time (using cache)
Each call to a proxied method would run something like:
<?php
$key = $this->_getCacheKey();
if ($this->_cache->exists($key) == false)
{
    $records = $this->_originalObject->getBillingRecords();
    $this->_cache->save($key, $records);
}
return $this->_cache->get($key);
Do you think this is something we could do with PHP? Do you know if it is a standard pattern? How would you implement it?
It would require you to:
dynamically implement a new child class of the original object,
replace the specified original methods with cached versions,
and instantiate this new kind of object.
I think PHPUnit does something like this with its mock system...
You can use the decorator pattern with delegation: create a cache decorator that accepts any object and delegates all calls to it, running each call through the cache first.
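A minimal sketch of such a decorator, assuming the Cache object exposes the exists()/save()/get() methods used in your example (the class name and the key scheme are just illustrative):
<?php
class CachingDecorator
{
    private $inner;
    private $cache;
    private $cachedMethods;

    public function __construct($inner, Cache $cache, array $cachedMethods)
    {
        $this->inner = $inner;
        $this->cache = $cache;
        $this->cachedMethods = $cachedMethods;
    }

    public function __call($method, $args)
    {
        // Non-cached methods are passed straight through to the wrapped object
        if (!in_array($method, $this->cachedMethods)) {
            return call_user_func_array(array($this->inner, $method), $args);
        }

        // Naive cache key: class + method + serialized arguments
        $key = get_class($this->inner) . '::' . $method . '::' . md5(serialize($args));
        if (!$this->cache->exists($key)) {
            $result = call_user_func_array(array($this->inner, $method), $args);
            $this->cache->save($key, $result);
        }
        return $this->cache->get($key);
    }
}
The trade-off compared to your ProxyCache::create() idea is that the decorator is not an instanceof User, so the User type hint in Report's constructor would have to be relaxed or replaced with an interface.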
Does that make sense?
I've written an Xtext-based plugin for some language. I'm now interested in creating a new independent view (as a separate plugin, though it requires my first plugin), which will interact with the currently-active DSL document - and specifically, interact with the model Xtext parsed (I think it's called the Ecore model?). How do I approach this?
I saw I can get an instance of XtextEditor if I do something like this when initializing my view:
getSite().getPage().addPartListener(new MyListener());
And then, in MyListener, override partActivated and partInputChanged to get an IWorkbenchPartReference, which is a reference to the XtextEditor. But what do I do from here? Is this even the right approach to this problem? Should I instead use some notification functionality from the Xtext side?
Found it out! First, you need an actual document:
IXtextDocument doc = editor.getDocument();
Then, if you want to access the model:
// for read-only access, doc.readOnly(...) with the same kind of unit of work is sufficient
doc.modify(new IUnitOfWork.Void<XtextResource>() { // can also use a plain IUnitOfWork instead of the Void variant
    @Override
    public void process(XtextResource state) throws Exception {
        // work with the parsed model here
    }
});
And if you want to get live updates whenever it changes:
doc.addModelListener(new IXtextModelListener() {
    @Override
    public void modelChanged(XtextResource resource) {
        for (EObject model : resource.getContents()) {
            // react to the updated model here
        }
    }
});
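For completeness, a minimal sketch of the listener side (assuming the activated part is an XtextEditor; the other IPartListener2 callbacks are omitted here and need empty implementations if your Eclipse version does not provide default methods):
import org.eclipse.ui.IPartListener2;
import org.eclipse.ui.IWorkbenchPart;
import org.eclipse.ui.IWorkbenchPartReference;
import org.eclipse.xtext.ui.editor.XtextEditor;
import org.eclipse.xtext.ui.editor.model.IXtextDocument;

public class MyListener implements IPartListener2 {

    @Override
    public void partActivated(IWorkbenchPartReference ref) {
        IWorkbenchPart part = ref.getPart(false); // false: do not force the part to be restored
        if (part instanceof XtextEditor) {
            IXtextDocument doc = ((XtextEditor) part).getDocument();
            // register the model listener shown above, or run a unit of work on doc
        }
    }

    // partInputChanged, partClosed, etc. left out here
}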