Where to put calls to 3rd party APIs in Apigility/ZF2?

I have just completed my first API in Apigility. Right now it is basically a gateway to a database, storing and retrieving multi-page documents uploaded through an app.
Now I want to run some processing on the documents, e.g. process them through a 3rd party API or modify the image quality etc., and return them to the app users.
Where (in which class) do I generally put such logic? My first reflex would be to implement it in the Resource classes. However, I feel they would become quite messy, obscuring a clear view of the API's interface in the code and creating a dependency on a foreign API. I also feel limited because each method corresponds to an API call.
What if there is significant processing/computing time, meaning I cannot return the result directly in response to a GET request? I thought about running an asynchronous process and sending a push notification to the app once the processing is complete. But again, where in the code would I ideally implement such processing logic?
I would be very happy to receive some architectural advice from someone who is more seasoned in developing APIs. Thank you.

You can use the zf-rest resource events to connect listeners containing your additional custom logic, without polluting your resources.
These events are fired in the RestController class (for example a post.create event here on line 382).
When you use the Apigility Doctrine module you can also use the events triggered in the DoctrineResource class (for example the DoctrineResourceEvent::EVENT_CREATE_POST event, here on line 361) to connect your listeners.
For managing that you can use a queueing service such as ZendQueue, or a third-party module built on top of ZendQueue. You can find various ZF2 queueing systems/modules via Google.
By injecting the queueing service into your listener, you can push your jobs directly onto your queue.

Related

How to subscribe to Topic in Dapr for .Net Core outside Controllers

I just started with Dapr a few days back and although I am able to publish and subscribe to events in Dapr, the way I am doing so is using the Topic method attribute on an action method within a controller such as this...
And while this works, I prefer not to mix integration events with the service API. This is the Swagger...
I get that the topic name is long but it's so I can ensure unique topics.
What I'd prefer is to position the handler outside of any controller. Something like this...
Is this even possible?
I derived a solution from the .NET Dapr client routing sample.
For each event I add a MapPost endpoint similar to the following.
The RequestDelegator receives the event for a given Topic; it resolves the handler class from the interface and topic attributes and then invokes its Handle method, passing in the event data from Dapr.
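The original snippet isn't shown here; as a rough sketch under my own naming (without the RequestDelegator, resolving the handler straight from DI), such a registration might look like this. CustomerCreated, IEventHandler<T>, the route, and the pubsub/topic names are all illustrative; UseCloudEvents, MapSubscribeHandler, and WithTopic come from the Dapr.AspNetCore package:

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddSingleton<IEventHandler<CustomerCreated>, CustomerCreatedHandler>();
var app = builder.Build();

app.UseCloudEvents();       // unwrap the CloudEvents envelope Dapr publishes
app.MapSubscribeHandler();  // answers Dapr's subscription discovery call

// One MapPost per integration event; no controller involved.
app.MapPost("/events/customer-created",
    async (CustomerCreated evt, IEventHandler<CustomerCreated> handler) =>
    {
        await handler.HandleAsync(evt);  // handler is resolved from DI
        return Results.Ok();
    })
    .WithTopic("pubsub", "myapp.events.customer-created");

app.Run();

public record CustomerCreated(Guid CustomerId);

public interface IEventHandler<in TEvent>
{
    Task HandleAsync(TEvent evt);
}

public class CustomerCreatedHandler : IEventHandler<CustomerCreated>
{
    public Task HandleAsync(CustomerCreated evt) => Task.CompletedTask; // real work goes here
}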
My services follow the CQRS and Event Sourcing patterns, so integration events will rarely be the same shape as the command inputs. In my case, events are typically much lighter than commands, consisting mostly of related Ids.

Akka.Net: Transparently Passing Along Contextual Information for Auditing/Authorization

Background
We have very strict auditing requirements and want to be able to correlate every action our system takes on behalf of the user to a specific authentication operation (sign-on). In addition to these strict auditing requirements, we also have some complex authorization requirements unsolvable by simple claims based authorization.
Considering both of these together led me to wonder about the feasibility of an 'envelope' type design, where messages stemming from a user request are wrapped in an envelope containing the necessary information, such as their auth token and info about the sending machine. Now, it would be simple enough to add a property for this token to every message, but that seems tacky, and since it's a rather cross-cutting concern I would rather it not pollute every protocol in the system, which is why I think the envelope idea is worth considering. This approach would also require the cooperation of every actor in the system, and my goal is to have this be transparent to actors who don't need any of this information, but also make the information available in case actors do need it. For actors that do need it, it's also OK if they just accept the envelope type directly.
Imagined Solution
Overview
- Wrap each Tell operation in an envelope used to transport the required contextual information
  - Perhaps implemented with a custom actor ref provider and actor ref wrapping the configured ones
- Unwrap the envelope, if present, on each receive operation
  - Custom mailbox
  - Would also handle sending a message to the auditing service
- How to make the contextual information available to the actor?
  - Can we add to the actor's Context object somehow?
  - Also acceptable for the actor to accept the envelope type / not use a custom mailbox in this case
Discussion
In order to make this all transparent, my initial thinking is to 'intercept' the send/receive operations. I understand enough Akka.NET to implement a custom mailbox, and I think this would be the way to go for this kind of approach, but I'm open to other ideas. The mailbox would perform the unwrap and make the contextual information available to the actor in case it's required (99% of the time it's not; when it is, it's likely better to just accept the envelope directly, to be explicit). The mailbox would also fulfill the auditing requirement by sending a message to the auditing service with the required information, which includes not only the contextual information from the request but also local machine information recording where and by whom the processing was done.
The part I'm second-guessing myself on is intercepting the send operation (Tell). Since IActorRef instances are created via a configured IActorRefProvider, and since that provider handles the Tell operation (via its created IActorRef instances), I think it makes sense to write a custom IActorRefProvider and a custom IActorRef. Both would wrap the configured implementations (decorator pattern), and the custom IActorRef would provide the required behavior in its Tell method. For web API apps (the only entry point for users), it would pull the required contextual information from HttpContext (one custom ref provider), and for backend apps (another custom ref provider) it would pull it from the current message's context. I'm not sure how to add data to the actor's Context property, but I'm assuming it is possible.
With these two pieces in place, the contextual information would effectively be passed along from actor to actor and service to service. So even if a message is 20 actors down the line, if it was initially instigated by a user via the REST API it would still carry that contextual information, allowing a full and complete audit tracing each action our system takes back to a specific sign-on.
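To make the receive side concrete, here is a rough sketch of the unwrap using an overridable hook on the actor base class instead of a full custom mailbox. AuditEnvelope, AuditedReceiveActor, and their members are my assumptions, not an established API:

using Akka.Actor;

// Envelope carrying the cross-cutting audit context alongside the real message.
public sealed class AuditEnvelope
{
    public AuditEnvelope(object message, string authToken, string machine)
    {
        Message = message;
        AuthToken = authToken;
        Machine = machine;
    }

    public object Message { get; }
    public string AuthToken { get; }
    public string Machine { get; }
}

// Base actor that unwraps the envelope before normal processing, so actors
// that don't care about the context keep their Receive handlers unchanged.
public abstract class AuditedReceiveActor : ReceiveActor
{
    protected AuditEnvelope CurrentAudit { get; private set; }

    protected override bool AroundReceive(Receive receive, object message)
    {
        if (message is AuditEnvelope env)
        {
            CurrentAudit = env; // expose context to derived actors (and to auditing)
            return base.AroundReceive(receive, env.Message);
        }
        return base.AroundReceive(receive, message);
    }
}

The send side (wrapping Tell) would still need the custom IActorRef/IActorRefProvider decorators described above, and the auditing message could be fired from AroundReceive before delegating.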
What I'm Hoping For
The primary thing I'm hoping for with this post is validation that this is a reasonable approach to take, and if not, why not, plus alternate suggestions for achieving the desired behavior. Also very welcome would be any code samples for custom mailboxes/actor refs/actor ref providers, and extra cookies if they're doing something similar to what I'm trying to accomplish here. Another welcome tidbit is how to do the mailbox configuration so I don't need to manually update all of my Props with the custom mailbox implementation. Akka.NET configuration is definitely a weak point of mine, particularly the deployments section, so any core knowledge/articles/advice here is greatly appreciated!
Thanks for taking the time to read this! Any and all help is much appreciated!
Other StackOverflow Issues:
The answers provided in these issues require the cooperation of every actor. Ideally this is all transparent and actors that don't need to use this contextual information can be written as if it didn't exist.
Passing Contextual Information
How to elegantly log contextual information along with every message
There were a couple others I viewed [can't find them right now for some reason], but they all either required cooperation or global shared state [isn't that what akka avoids? :p]
Phobos, a proprietary observability library for Akka.NET, wraps all messages inside a distributed tracing context, which can be aggregated back together in an off-the-shelf tracing system that supports OpenTracing, such as Jaeger, Zipkin, or Azure Application Insights.
You can append custom data to each of the traces captured inside your actors via the Context.GetInstrumentation() method inside any of your actors; custom data can include tags such as a unique userId, a transaction Id, and so on. That's all part of the OpenTracing specification.
Disclosure: I work for Petabridge, the makers of Phobos. It's proprietary and costs money to use, but it's purpose built to offer this type of decentralized, but complete tracing out of the box.
Alternatively, if you didn't want to use Phobos you might be able to accomplish this using a custom messaging protocol for context propagation and structured logging with the Akka.Logger.Serilog library.

Can Web API be used in an application which is not accessed by any external application?

I'd read somewhere that Web API can be used whenever one needs to do data-intensive work, e.g. an autocomplete textbox that fetches data via AJAX on each key press.
Now someone told me that Web API shouldn't be used within applications which are not externally accessed; rather, standard controller actions should be used for the same work, as they are capable of returning data in a similar fashion to Web API.
I'd like to hear your suggestions on this.
Depends on how you look at it. If all you need is AJAX-ification of your controller actions, then you really don't need Web API. Your actions can return a JsonResult, and it is very easy to consume that from your client side through an AJAX call.
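For example, a plain MVC controller action for the autocomplete case from the question could look like this (a sketch; the controller name and data are made up):

using System;
using System.Linq;
using System.Web.Mvc;

public class CityController : Controller
{
    // Returns JSON suggestions for an AJAX autocomplete; no Web API involved.
    public JsonResult Suggest(string term)
    {
        var matches = new[] { "Berlin", "Bern", "Bremen" } // stand-in for a real lookup
            .Where(c => c.StartsWith(term, StringComparison.OrdinalIgnoreCase));
        return Json(matches, JsonRequestBehavior.AllowGet); // AllowGet since this is a read-only GET
    }
}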
Web API makes it easy for you to expose your actions to external clients. It supports the HTTP protocol and JSON and XML payloads automatically, out of the box, without you writing the code for it. And there is nothing preventing you from consuming the same Web API actions from your own internal clients in an AJAX manner.
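The Web API equivalent of the same action would look roughly like this; content negotiation then returns JSON or XML depending on the client's Accept header (again a sketch with made-up names):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web.Http;

public class CitiesController : ApiController
{
    // GET api/cities?term=be
    public IEnumerable<string> Get(string term)
    {
        return new[] { "Berlin", "Bern", "Bremen" } // stand-in for a real lookup
            .Where(c => c.StartsWith(term, StringComparison.OrdinalIgnoreCase));
    }
}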
So the answer to your question depends on your design. If you don't have external clients, then there is no strong need for you to have Web API; your standard controller actions can do the job.

Windows Workflow 4.5 Paradigm Questions

I've been digging into the technical details and implementation of Windows Workflow 4.5 as a beginner and having decent results. My question is more of a "why and when" vs. a "how to" question so bear with me.
I've taken a familiar concept to us all and abstracted the business logic into WF, namely the universal log on process. What I wanted to accomplish is having reusable logic that I can call from an MVC website, a Windows Forms application, etc. and have everything run through the same workflow and I have achieved that.
Now I have 2 conceptual questions as to "when" to apply WF and when to use code.
1 - Take simple validation as an example. I'm trying to log in but I've passed an empty user name or password string. Obviously, I want to send a message back to the end user, "UserName Required" and "Password Required", which I've done. The way I did that is with a validation class (the FluentValidation NuGet package, if it matters), but the important thing is I'm doing this in code. So, in WF I call my validation code via an ExecuteMethod, and everything works just fine. My question is: is this the wrong approach with a WF mindset? Should I be doing inline WF "If" activities/decisions and building up the validation messages inside of WF directly, versus calling out to some chunk of code? I'm asking not just about validation, which is simply a concept we can all relate to; more generally, should I be attempting to put anything and everything I can into WF itself, or is it better to call custom code? I'm looking for best practice with reasoning from seasoned software architects with WF experience, rather than mere opinion, if possible.
2 - Picking up a workflow on another machine. Part of the same login workflow requires a service call. I've written the code and workflow in such a way that the workflow receives an In argument of ILogOnService, which has an interface method AuthenticateUser. The concrete implementation I'm passing in calls out to an MVC4 Web API post method, asynchronously, to do a standard ASP.NET Membership ValidateUser. Again, should I be calling this Web API PostAsync from inside the WF workflow? If so, doesn't that tightly couple my workflow to ASP.NET Membership and my particular service choice? It seems there are ways to run a workflow to a certain point and then resume the process on another machine, e.g. where a service is running, and continue from there, but I'm not able to find good examples of attempting that.
Just looking for some guidelines and ideas from the pros at this technology but I will pick the most informative answer.
There is nothing wrong with using C# code to implement details of a workflow. In fact, I always tell people that if they are using WF4 with just the standard out-of-the-box activities they are probably doing things wrong. You really need to be creating (or have someone else create for you) custom activities that model business activities for your business. If that means creating an activity that validates a login using FluentValidation, that is perfectly fine. Another time you might build a higher-level business activity out of lower-level WF4 activities; just combine them in whatever way works best in your case.
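As a sketch of that idea, a custom activity wrapping FluentValidation might look like the following (LoginRequest and its validator are illustrative types, not code from the question):

using System.Activities;
using System.Linq;
using FluentValidation;

public class LoginRequest
{
    public string UserName { get; set; }
    public string Password { get; set; }
}

public class LoginRequestValidator : AbstractValidator<LoginRequest>
{
    public LoginRequestValidator()
    {
        RuleFor(r => r.UserName).NotEmpty().WithMessage("UserName Required");
        RuleFor(r => r.Password).NotEmpty().WithMessage("Password Required");
    }
}

// A custom WF4 activity, so validation shows up as a first-class step in the designer.
public sealed class ValidateLogin : CodeActivity<bool>
{
    public InArgument<LoginRequest> Request { get; set; }
    public OutArgument<string[]> Errors { get; set; }

    protected override bool Execute(CodeActivityContext context)
    {
        var result = new LoginRequestValidator().Validate(Request.Get(context));
        Errors.Set(context, result.Errors.Select(e => e.ErrorMessage).ToArray());
        return result.IsValid; // becomes the activity's Result
    }
}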
Calling a service with something like PostAsync can work well if you know the action is short-lived and normally available. However, when you get into SOA styles you really want to start using temporal decoupling, so one service is not dependent on another service being available right away. And when you get into temporal decoupling you really want to be using queues, maybe MSMQ or another similar technology. In that case you really want to send a one-way message with a response queue, and have the workflow go idle and wait for the response message to arrive. This would reload the workflow, possibly on another machine. That might not always be appropriate (for example, for your login it would not be much use to grant it a day later because the membership service was unavailable), but it can result in very scalable and fault-tolerant systems. Of course there is no free lunch, as these systems are very hard to design properly.
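A rough sketch of the "go idle and wait" part, using a WF4 bookmark that the host resumes when the queued response arrives (the activity and bookmark names are illustrative):

using System.Activities;

// Creates a bookmark and idles the workflow; the host resumes it with the
// response message (e.g. pulled off MSMQ), possibly on a different machine
// after the instance has been persisted and reloaded.
public sealed class WaitForResponse : NativeActivity<string>
{
    public InArgument<string> BookmarkName { get; set; }

    protected override bool CanInduceIdle
    {
        get { return true; } // required so the runtime may persist while waiting
    }

    protected override void Execute(NativeActivityContext context)
    {
        context.CreateBookmark(BookmarkName.Get(context), OnResponse);
    }

    private void OnResponse(NativeActivityContext context, Bookmark bookmark, object value)
    {
        Result.Set(context, (string)value); // the response payload
    }
}

On the host side, WorkflowApplication.ResumeBookmark(bookmarkName, payload) feeds the response back in when the message arrives.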

PRISM and WCF - Do they play nice?

Ok, this is a more general "ugly critters in the corner" question. I am planning to start a project on WCF and PRISM. I have been playing around with PRISM for some time now and, I must say, I like it. A solid foundation for applications, with nice possibilities to grow.
Now I want to incorporate WCF and build a distributed application, with one part on a server and two on the clients. It could be even the same machine, or not, depending on the scenario.
My idea now is to take the event concept from PRISM and extend it "over the wire" using WCF and callbacks, as described in the WCF AlarmClock Callback Example.
I created a small picture to illustrate the idea (mainly for me); perhaps this makes things a little clearer.
The grey arrows stand for "using lib". The "WCF event base" means normal PRISM events where the publish method is called "over the wire".
There are a few questions which come to mind:
Are there any existing known examples for such things?
What will be the best way to "raise" events over the wire?
Any possible problems with this concept (the ugly critters mentioned earlier)?
Regarding the second question, I currently think about raising the events using a string (the type of the concrete event I want to raise) and the payload as argument, something like public void RaiseEvent(string eventType, object eventPayload){}. The payload needs to be serializable; perhaps I'll even include a hash check. (Meaning if I raise, e.g., an event with a picture as argument 10 times, I only transfer the picture once, afterwards using the hash to let the server reuse the buffered payload when publishing.)
Ok, I think you get the idea. This "thing" should behave like a giant single application, using a kind of WCF_EventAggregator instead of the normal PRISM IEventAggregator. (wow, while writing I just got the idea to "simply" extend the IEventAggregator, have to think about this)...
Why do I write this? Well, for feedback mainly, and to sort my thoughts. So comments welcome, perhaps anything I should be "careful" about?
Chris
[EDITS]
Client distribution
There should be an undefined number of clients, and the server should not be aware of them. The server itself can be a client to itself, raising strongly typed PRISM events in other parts of the source code.
The main difference between a "client" and a "server" is the actual implementation of the WCF_PRISM connector, see next chapter...
Client Event raising (PRISM feature)
In PRISM, to raise simple events you do NOT even need a reference to a service interface. The IEventAggregator can be obtained via dependency injection, providing an instance of the desired event (e.g. WeatherChangedEvent). This event can be raised by simply calling eventInstance.Publish(23), because the event is implemented as public class WeatherChangedEvent : CompositePresentationEvent<int>.
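In code, the whole ceremony is only this much (standard PRISM; the WeatherPublisher wrapper is illustrative):

using Microsoft.Practices.Prism.Events;

public class WeatherChangedEvent : CompositePresentationEvent<int> { }

public class WeatherPublisher
{
    private readonly IEventAggregator _aggregator;

    public WeatherPublisher(IEventAggregator aggregator) // obtained via dependency injection
    {
        _aggregator = aggregator;
    }

    public void Announce(int temperature)
    {
        // Raise the event; subscribers anywhere in the shell/modules receive it.
        _aggregator.GetEvent<WeatherChangedEvent>().Publish(temperature);
    }
}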
WCF - PRISM Connector
Subscribing to events is as simple as raising them. Every module can subscribe to events using the same technique: obtaining a reference and using Subscribe to attach to the event.
Here is where the "magic" should happen. The clients will include a PRISM module responsible for connecting PRISM events to WCF message sends. It will basically subscribe to all available events in the solution (they are all defined in the infrastructure module anyway) and send out a WCF message whenever an event is raised.
The difference between a SERVER and a CLIENT is the implementation of this module. There needs to be a slight difference because of two things.
The WCF setup settings
The flow of events to prevent an infinite loop
The event flow will be (example):
1. Client obtains a reference to WeatherChangedEvent
2. wChanged.Publish(27) --> normal PRISM event raising
3. The WCF_PRISM module is subscribed to the event and sends it to the server
4. The server internally gets its instance of WeatherChangedEvent and publishes it
5. The server calls back to all clients, raising their WeatherChangedEvent
Open Points
The obvious point is preventing a loop. If the server raised the event in ALL clients, the clients would call back to the server, raising the event again, and so on... So there needs to be a distinction between an event caused locally (which means I have to send it to the server) and a "server-caused event" (which means I do not have to send it to the server).
Also, if a client initiated the event itself, it does not need to be called back by the server, because the event has already been raised (in the client itself, step 2).
All this special behaviour will be encapsulated in the WCF event raiser module, invisible to the rest of the app. I have to think about how to know whether an event was already published; perhaps a GUID or something like that would be a good idea, as sketched below.
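A minimal sketch of that GUID idea (all names hypothetical):

using System;
using System.Collections.Generic;
using System.Runtime.Serialization;

// Wraps every event crossing the wire with a stable id, so the connector
// module can recognize events it has already handled and break the loop.
[DataContract]
public class EventEnvelope
{
    [DataMember] public Guid EventId { get; set; }       // assigned once, kept across hops
    [DataMember] public string EventType { get; set; }   // resolved via the shared infrastructure module
    [DataMember] public string PayloadJson { get; set; } // serialized args (an object member would need KnownTypes)
}

// Inside the WCF_PRISM connector: forward/publish only first sightings.
public class SeenEventFilter
{
    private readonly HashSet<Guid> _seen = new HashSet<Guid>(); // in practice, expire old ids

    public bool FirstSighting(EventEnvelope e)
    {
        return _seen.Add(e.EventId); // false on replays
    }
}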
And now the second big question: what I was aiming at when talking about "strings" earlier. I do not want to write a new service interface definition every time I add an event. Most events in PRISM are defined in one line; especially during development, I do not want to update the WCF_Event_Raising_Module each time I add an event.
I thought about sending the events directly when calling WCF, e.g. using a function with a signature like:
public void RaiseEvent(EventBase e, object[] args)
The problem is, I do not really know if I can serialize PRISM events that easily. They all derive from EventBase, but I have to check this... For that reason, I had the idea to use the type (as a string), because I know the server shares the infrastructure module and can obtain its own instance of the event (no need to send it over the wire, only the args).
So far till here; I will keep the question open for more feedback. The main new insight I just got: I have to think about the recursion / infinite loop problem.
Btw. if anybody is completely confused by all this event talk, give PRISM a try. You will love it, even if you only use DI and Events (RegionManager e.g. is not my favorite)
Chris
[END EDIT 1]
This is a very interesting approach. I would say only two things here:
You are really asking for trouble if you use strings and object parameters. Strongly typed EventAggregator events (inheriting from CompositePresentationEvent) are the way to go here. Maintainability will go way up if you do this.
Your model for your WCF -> EventAggregator bridge should consider everything to and from the EventAggregator as an "event" and everything to/from the WCF services as "messages". What you should really consider is that you are essentially translating an EventAggregator event into a message, rather than asking "how do I raise WCF events".
I think what you are doing is feasible. Looking at your implementation I really like how you are thinking about it.
Slight Alternative (w/ strong typing)
I wanted to throw a little something out there and see what you thought about it... maybe it will influence your design slightly. Specifically this is meant to address my first point above and go even further with the strong-typing.
Have you considered having EventAggregator-backed implementations of your service interface? Let's say in your example you have an IWeatherService WCF service that you are working with. Currently, as I understand it, your usage will look something like this:
Client uses the WCF Event Client library and calls RaiseEvent("ChangeWeather", Weather.Sunny);
The WCF Event Client library translates this into the appropriate call to the WCF service waiting to receive this message, using the IWeatherService channel interface to do so. Probably with a big nasty switch statement based on the name of the method call.
Why not modify this slightly? Make IWeatherService a shared contract among all of the servers and clients. The servers will have the actual implementation, obviously, but the clients will have EventAggregator-backed implementations that go to a central broker that queues and sends messages to servers.
Write an EventAggregator-backed implementation of the IWeatherService that raises events to be received by a central message broker and throw that implementation in your container for clients to use.
public class ClientWeatherService : IWeatherService
{
    private readonly IEventAggregator _aggregator;

    public ClientWeatherService(IEventAggregator aggregator)
    {
        _aggregator = aggregator;
    }

    public void ChangeWeather(Weather weather)
    {
        // Instead of calling the real service, publish the strongly typed event
        // for the central broker to pick up.
        ChangeWeatherEvent cwEvent = _aggregator.GetEvent<ChangeWeatherEvent>();
        cwEvent.Publish(weather);
    }
}
From there, instead of using your "WCF Event Client Library" directly, they use the IWeatherService directly, not knowing that it doesn't call the actual service.
public class MyWeatherViewModel : ViewModel
{
    private readonly IWeatherService _weatherService;

    // Depends only on the service contract; the container injects the
    // EventAggregator-backed implementation shown above.
    public MyWeatherViewModel(IWeatherService weatherService)
    {
        _weatherService = weatherService;
    }
}
Then, you'd have some event handler setup to make the WCF calls to the real service, but now you have the benefit of strong-typing from the clients.
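That handler could be as small as the following sketch, which subscribes to the event and forwards it over an ordinary WCF channel (the endpoint name, threading option, and dispatcher class are my assumptions):

using System.ServiceModel;
using Microsoft.Practices.Prism.Events;

// Bridges aggregator events back to the real WCF service.
public class WeatherServiceDispatcher
{
    private readonly IWeatherService _channel;

    public WeatherServiceDispatcher(IEventAggregator aggregator)
    {
        // "WeatherServiceEndpoint" would be an endpoint defined in app.config.
        _channel = new ChannelFactory<IWeatherService>("WeatherServiceEndpoint").CreateChannel();

        // keepSubscriberReferenceAlive: the aggregator holds weak references by default.
        aggregator.GetEvent<ChangeWeatherEvent>()
                  .Subscribe(Forward, ThreadOption.BackgroundThread, keepSubscriberReferenceAlive: true);
    }

    private void Forward(Weather weather)
    {
        _channel.ChangeWeather(weather);
    }
}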
Just a thought.
I really like this type of question. I wish more people would ask this kind of thing on Stackoverflow. Gets the brain moving in the morning :)
It seems like a complicated approach to the problem.
Are you raising the event from the client application, raising the events from the service using the callback contract, or both?
I would approach this with a simple service class in the client. It can implement the Callback contract, and for each callback method it can just raise a Prism event locally to any subscribers in the client. If you need to raise events that are handled by the service, then the service class can subscribe to those events and call the wcf service.
All you really need is a class that abstracts the details of the WCF service away from the client and exposes its interface through Prism events, for example as sketched below.
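Sketched out, that adapter might look like this (the duplex callback contract and member names are assumptions, reusing the WeatherChangedEvent from the question):

using Microsoft.Practices.Prism.Events;

// Implements the WCF duplex callback contract and republishes each callback
// as a local PRISM event, so the rest of the client never touches WCF.
public class WeatherCallbackAdapter : IWeatherServiceCallback
{
    private readonly IEventAggregator _aggregator;

    public WeatherCallbackAdapter(IEventAggregator aggregator)
    {
        _aggregator = aggregator;
    }

    public void OnWeatherChanged(int temperature) // invoked by the service over the callback channel
    {
        _aggregator.GetEvent<WeatherChangedEvent>().Publish(temperature);
    }
}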
I personally wouldn't want to modify / extend the infrastructure component and create a dependency on the concrete wcf service.