I have just completed my first API in Apigility. Right now it is basically a gateway to a database, storing and retrieving multi-page documents uploaded through an app.
Now I want to run some processing on the documents, e.g. pass them through a third-party API or adjust the image quality, and return them to the app users.
Where (in which class) do I generally put such logic? My first reflex would be to implement it in the Resource classes. However, I feel they would become quite messy, obscuring the API's interface in the code and creating a dependency on a foreign API. I also feel limited because each method corresponds to an API call.
What if the processing takes a certain amount of time, meaning I cannot return the result directly in the response to a GET request? I thought about running an asynchronous process and sending a push notification to the app once the processing is complete. But again, where in the code would I ideally implement such processing logic?
I would be very happy to receive some architectural advice from someone who is more seasoned in developing APIs. Thank you.
You can use the zf-rest resource events to connect listeners containing your additional custom logic without polluting your resources.
These events are fired in the RestController class (for example a post.create event here on line 382).
When you use the Apigility-Doctrine module you can also use the events triggered in the DoctrineResource class (for example the DoctrineResourceEvent::EVENT_CREATE_POST event here on line 361) to connect your listeners.
You can use a queueing service like ZendQueue, or a third-party module built on top of ZendQueue, to manage that. You can find various ZF2 queueing systems/modules with a quick search.
By injecting the queueing service into your listener you can simply push your jobs directly into your queue.
I need to implement a Windows service that connects to EMC's Documentum and receives an event every time a document is loaded.
The event should contain the reference to the document itself.
Is there a way to do this via an API, or do I have to poll using a web service?
The quickest approach would be to implement this via polling.
Your Windows service can either
access a DFS-exposed service (which you need to implement on the DCTM side)
access the docbase directly using DFC/.NET
But the question here is: what exactly do you want to check?
Document loaded - if you are referring to a dm_document object being created (e.g. by a user/system or some sort of upload functionality), you will need to register dm_audittrail auditing for that event. Once that is in place, your service or API call can check for dm_audittrail entries.
Alternatively, you could use the Documentum BOF (Business Object Framework) to write custom code that is triggered every time, for instance, a new document is created (or updated), i.e. on a specific predefined event.
This custom code could do whatever you like, for instance broadcast a JMS message to a queue your Windows service is listening to. To implement what you want (event-based notification), you need some communication channel between your application and the Content Server.
Or simply poll the docbase every x seconds.
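If you go the polling route, the loop in the Windows service itself stays small. Here is a rough sketch only; IDocbaseQuery and its GetObjectIds method are hypothetical placeholders for whatever you use to reach the docbase (a DFS proxy, DFC/.NET, etc.), and the exact DQL / date format against dm_audittrail may need adjusting for your setup:

using System;
using System.Collections.Generic;

// Hypothetical abstraction over your docbase access (DFS proxy, DFC/.NET, REST, ...)
public interface IDocbaseQuery
{
    IEnumerable<string> GetObjectIds(string dql);
}

public class AuditTrailPoller
{
    private readonly IDocbaseQuery _query;
    private DateTime _lastCheck = DateTime.UtcNow;

    public AuditTrailPoller(IDocbaseQuery query)
    {
        _query = query;
    }

    // Call this from the Windows service timer every x seconds
    public void Poll()
    {
        string dql =
            "SELECT audited_obj_id FROM dm_audittrail " +
            "WHERE event_name = 'dm_save' " +
            "AND time_stamp > DATE('" + _lastCheck.ToString("dd/MM/yyyy HH:mm:ss") + "', 'dd/mm/yyyy hh:mi:ss')";
        _lastCheck = DateTime.UtcNow;

        foreach (string objectId in _query.GetObjectIds(dql))
        {
            // objectId is the reference to the audited document;
            // hand it to whatever processes the loaded document
            Console.WriteLine("Document event for: " + objectId);
        }
    }
}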
You probably already know this, but a lot of info can be found at:
https://community.emc.com/community/edn
Also BOF Guide (older version): https://developer-content.emc.com/developer/downloads/BusinessObjectsDevelopersGuide.pdf
I think you can use the REST service. Documentum's whole functionality is exposed as a REST service: https://community.emc.com/community/labs/archivedprojects/dctm_rest
I read somewhere that whenever one needs to do data-intensive work, Web API could be used, e.g. an autocomplete textbox where we fetch data via AJAX on each key press.
Now someone told me that Web API shouldn't be used in applications that are not accessed externally; rather, a controller action should be used for the same work, as it is capable of returning data in a similar fashion to Web API.
I'd like to hear your suggestions on this.
It depends on how you look at it. If all you need is AJAX-ification of your controller actions, then you really don't need Web API. Your actions can return a JsonResult, and it is very easy to consume that from your client side through an AJAX call.
Web API makes it easy for you to expose your actions to external clients. It supports the HTTP protocol and JSON and XML payloads automatically, out of the box, without you writing the code for it. That said, there is nothing preventing you from consuming the same Web API actions from your own internal clients via AJAX.
So the answer to your question depends on your design. If you don't have external clients, then there is no strong need for you to have Web API. Your standard controller actions can do the job.
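To make the contrast concrete, here is a minimal sketch of both options; ProductRepository is a made-up stand-in for your real data access:

using System.Collections.Generic;
using System.Web.Http;
using System.Web.Mvc;

// Stand-in for real data access, only so the sketch compiles
public static class ProductRepository
{
    public static List<string> FindNames(string term)
    {
        return new List<string> { term + " one", term + " two" };
    }
}

// Plain MVC action: enough for AJAX calls from your own views
public class ProductsController : Controller
{
    public JsonResult Search(string term)
    {
        return Json(ProductRepository.FindNames(term), JsonRequestBehavior.AllowGet);
    }
}

// Web API equivalent: JSON/XML content negotiation comes for free,
// which mainly pays off when external clients consume it
public class ProductsApiController : ApiController
{
    public IEnumerable<string> GetSearch(string term)
    {
        return ProductRepository.FindNames(term);
    }
}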
We're currently working on a Worklight project using Dojo (more specifically dojox/app). We managed to create a basic example with a store, model, controller and a view. However, now we want to connect this to our Worklight adapter.
What is the best approach to connecting a dojox/app application to the backend? We were thinking about feeding our store with the data from the Worklight adapter; however, we need all CRUD operations, and our data should stay in sync with the server because multiple users might be working on the same item.
The best general solution I can think of is using a JsonRest store, but we're using the WL.Client.invokeProcedure function to call our adapter, so we're not consuming the service directly.
We found a solution by using WL.JSONStore from Worklight. Its API isn't compatible with the dojo/store API (logically, since it wasn't meant to be), but we wrote a dojo/store-based proxy class which does nothing more than translate and forward calls to WL.JSONStore.
Ok,
this is a more general "ugly critters in the corner" question. I am planning to start a project on WCF and PRISM. I have been playing around with PRISM for some time now, and I must say I like it: a solid foundation for applications with nice possibilities to grow.
Now I want to incorporate WCF and build a distributed application, with one part on a server and two on the clients. It could be even the same machine, or not, depending on the scenario.
My idea now is to take the event concept from PRISM and extend it "over the wire" using WCF and callbacks, as described here: WCF AlarmClock Callback Example.
I created a small picture to illustrate the idea (mainly for me), perhaps this makes things a little more clear:
The grey arrows stand for "uses lib". The WCF event base means normal PRISM events whose publish method is called "over the wire".
There are a few questions which come to mind:
Are there any existing known examples for such things?
What will be the best way to "raise" events over the wire?
Any possible problems with this concept (the ugly critters mentioned earlier)?
Regarding the second question, I am currently thinking about raising the events using a string (the type of the concrete event I want to raise) and the payload as an argument, something like public void RaiseEvent(string eventType, object eventPayload){}. The payload needs to be serializable; perhaps I would even include a hash check (meaning if I raise, for example, an event with a picture as the argument 10 times, I only transfer the picture once and afterwards use the hash to let the server use the buffered copy when publishing)...
Ok, I think you get the idea. This "thing" should behave like a giant single application, using a kind of WCF_EventAggregator instead of the normal PRISM IEventAggregator. (wow, while writing I just got the idea to "simply" extend the IEventAggregator, have to think about this)...
Why do I write this? Well, for feedback mainly, and to sort my thoughts. So comments welcome, perhaps anything I should be "careful" about?
Chris
[EDITS]
Client distribution
There can be an arbitrary number of clients, and the server should not be aware of them. The server itself can act as a client to itself, raising strongly typed PRISM events in other parts of the source code.
The main difference between a "client" and a "server" is the actual implementation of the WCF_PRISM connector, see next chapter...
Client Event raising (PRISM feature)
In PRISM, to raise simple events you do NOT even need a reference to a service interface. The IEventAggregator can be obtained via dependency injection, providing an instance of the desired event (e.g. WeatherChangedEvent). This event can be raised by simply calling eventInstance.Publish(23) because the event is implemented as public class WeatherChangedEvent : CompositePresentationEvent<int>
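For reference, a minimal sketch of that one-line event definition plus publish/subscribe (the PRISM namespace differs between versions, and WeatherModuleExample is just an illustrative class):

using System;
using Microsoft.Practices.Prism.Events; // namespace differs between PRISM versions

// One-line event definition, typically placed in the infrastructure module
public class WeatherChangedEvent : CompositePresentationEvent<int> { }

public class WeatherModuleExample
{
    private readonly IEventAggregator _aggregator;

    public WeatherModuleExample(IEventAggregator aggregator)
    {
        _aggregator = aggregator;

        // Subscribing uses the same GetEvent call as publishing
        _aggregator.GetEvent<WeatherChangedEvent>()
                   .Subscribe(temperature => Console.WriteLine("Weather changed: " + temperature));
    }

    public void RaiseWeatherChanged()
    {
        _aggregator.GetEvent<WeatherChangedEvent>().Publish(23);
    }
}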
WCF - PRISM Connector
Subscribing to events is as simple as raising them. Every module can subscribe using the same technique: obtaining a reference and calling Subscribe to attach to the event.
This is where the "magic" should happen. The clients will include a PRISM module responsible for connecting PRISM events to "WCF message sends". It will basically subscribe to all available events in the solution (they are all defined in the infrastructure module anyway) and send out a WCF message whenever an event is raised.
The difference between a SERVER and a CLIENT is the implementation of this module. There needs to be a slight difference because of two things.
The WCF setup settings
The flow of events to prevent an infinite loop
The event flow will be (example):
1. Client obtains a reference to WeatherChangedEvent
2. wChanged.Publish(27) --> normal PRISM event raising
3. The WCF_PRISM module is subscribed to the event and sends it to the server
4. The server internally gets its own instance of WeatherChangedEvent and publishes it
5. The server calls back to all clients, raising their WeatherChangedEvent
Open Points
The obvious point is preventing a loop. If the server raised the event in ALL clients, the clients would call back to the server, raising the event again, and so on... So there needs to be a distinction between a locally caused event (which I have to send to the server) and a "server-caused event" (which I do not have to send back).
Also, if a client initiated the event itself, it does not need to be called back by the server, because the event has already been raised (in the client itself, point 2).
All this special behaviour will be encapsulated in the WCF event raiser module, invisible to the rest of the app. I have to think about how to know whether an event has already been published; perhaps a GUID or something like that would be a good idea.
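A sketch of what that GUID idea could look like; EventEnvelope and WcfEventForwarder are made-up names, and the actual WCF call is left out (see the contract sketch further down):

using System;

// Made-up envelope that travels with every forwarded event
public class EventEnvelope
{
    public Guid OriginId { get; set; }    // node that originally raised the event
    public string EventType { get; set; }
    public object Payload { get; set; }
}

public class WcfEventForwarder
{
    // Identifies this client/server instance
    private static readonly Guid LocalId = Guid.NewGuid();

    // Called by the WCF_PRISM module whenever a PRISM event is raised locally
    public void OnLocalEvent(EventEnvelope envelope)
    {
        if (envelope.OriginId == Guid.Empty)
            envelope.OriginId = LocalId;

        // Forward only events that originated here; events pushed in from the
        // server are republished locally but never sent back, which breaks the loop
        if (envelope.OriginId == LocalId)
            SendToServer(envelope);
    }

    private void SendToServer(EventEnvelope envelope)
    {
        // WCF call omitted in this sketch
    }
}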
And now the second big question, which is what I was aiming at when talking about "strings" earlier: I do not want to write a new service interface definition every time I add an event. Most events in PRISM are defined in one line, and especially during development I do not want to update the WCF_Event_Raising_Module each time I add an event.
I thought about sending the events directly when calling WCF, e.g. using a function with a signature like:
public void RaiseEvent(EventBase e, object[] args)
The problem is, I do not really know whether I can serialize PRISM events that easily. They all derive from EventBase, but I have to check this... For that reason I had the idea to use the type (as a string), because I know the server shares the infrastructure module and can obtain its own instance of the event (no need to send it over the wire, only the argument).
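If the type-name-as-string route turns out to be the way to go, the WCF side could stay as small as a single duplex contract. A rough sketch with illustrative names, assuming the payload is serialized separately by the caller (e.g. with the DataContractSerializer):

using System.ServiceModel;

[ServiceContract(CallbackContract = typeof(IEventCallback))]
public interface IEventService
{
    // eventTypeName: CLR type name of a PRISM event defined in the shared infrastructure module
    // serializedPayload: the event argument, serialized by the caller
    [OperationContract(IsOneWay = true)]
    void RaiseEvent(string eventTypeName, byte[] serializedPayload);

    // Registers the calling client's callback channel
    [OperationContract]
    void Subscribe();
}

public interface IEventCallback
{
    // Server pushes the event back out to every registered client
    [OperationContract(IsOneWay = true)]
    void OnEventRaised(string eventTypeName, byte[] serializedPayload);
}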
That's it so far; I will keep the question open for more feedback. The main new "insight" I just got: I have to think about the recursion / infinite loop problem.
Btw. if anybody is completely confused by all this event talk, give PRISM a try. You will love it, even if you only use DI and Events (RegionManager e.g. is not my favorite)
Chris
[END EDIT 1]
This is a very interesting approach. I would say only two things here:
You are really asking for trouble if you use strings and object parameters. Strongly typed EventAggregator events (inheriting from CompositeEvent) are the way to go here. The maintainability will go way up if you do this.
Your model for your WCF -> EventAggregator bridge should consider everything to and from the EventAggregator as an "event" and everything to/from the WCF services as "messages". What you should really consider is that you are essentially translating an EventAggregator event into a message, rather than asking the question "how do I raise WCF events".
I think what you are doing is feasible. Looking at your implementation I really like how you are thinking about it.
Slight Alternative (w/ strong typing)
I wanted to throw a little something out there and see what you thought about it... maybe it will influence your design slightly. Specifically this is meant to address my first point above and go even further with the strong-typing.
Have you considered having EventAggregator-backed implementations of your service interface? Let's say in your example you have an IWeatherService WCF service that you are working with. Currently, as I understand it, your usage will look something like this:
Client uses the WCF Event Client library and calls RaiseEvent("ChangeWeather", Weather.Sunny);
The WCF Event Client library translates this into the appropriate call to the WCF service waiting to receive this message, using the IWeatherService channel interface to do so. Probably with a big nasty switch statement based on the name of the method call.
Why not modify this slightly? Make IWeatherService a shared contract among all of the servers and clients. The servers will have the actual implementation, obviously, but the clients will have EventAggregator-backed implementations that go to a central broker that queues and sends messages to servers.
Write an EventAggregator-backed implementation of the IWeatherService that raises events to be received by a central message broker and throw that implementation in your container for clients to use.
public class ClientWeatherService : IWeatherService
{
    private readonly IEventAggregator _aggregator;

    public ClientWeatherService(IEventAggregator aggregator)
    {
        _aggregator = aggregator;
    }

    public void ChangeWeather(Weather weather)
    {
        // Raise the event locally; the broker/forwarder makes the actual WCF call
        ChangeWeatherEvent cwEvent = _aggregator.GetEvent<ChangeWeatherEvent>();
        cwEvent.Publish(weather);
    }
}
From there, instead of using your "WCF Event Client Library", clients use IWeatherService directly, not knowing that it doesn't call the actual service.
public class MyWeatherViewModel : ViewModel
{
    private readonly IWeatherService _weatherService;

    public MyWeatherViewModel(IWeatherService weatherService)
    {
        _weatherService = weatherService;
    }
}
Then you'd have some event handler set up to make the WCF calls to the real service, but now you have the benefit of strong typing on the client side.
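That forwarding piece could be as small as a subscriber holding the real channel. A sketch only: IWeatherService, Weather, and ChangeWeatherEvent are the same types as in the snippet above, and "WeatherServiceEndpoint" is a made-up endpoint configuration name:

using System.ServiceModel;
using Microsoft.Practices.Prism.Events; // namespace differs between PRISM versions

// Lives in the broker/forwarding layer: turns the EventAggregator event into the real WCF call
public class WeatherServiceForwarder
{
    private readonly IWeatherService _realService;

    public WeatherServiceForwarder(IEventAggregator aggregator)
    {
        _realService = new ChannelFactory<IWeatherService>("WeatherServiceEndpoint").CreateChannel();

        aggregator.GetEvent<ChangeWeatherEvent>()
                  .Subscribe(weather => _realService.ChangeWeather(weather),
                             ThreadOption.BackgroundThread,
                             keepSubscriberReferenceAlive: true);
    }
}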
Just a thought.
I really like this type of question. I wish more people would ask this kind of thing on Stackoverflow. Gets the brain moving in the morning :)
It seems like a complicated approach to the problem.
Are you raising the events from the client application, or from the service using the callback contract? Or both?
I would approach this with a simple service class in the client. It can implement the callback contract, and for each callback method it can just raise a Prism event locally to any subscribers in the client. If you need to raise events that are handled by the service, then the service class can subscribe to those events and call the WCF service.
All you really need is a class that abstracts the details of the WCF service away from the client and exposes its interface through Prism events.
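A minimal sketch of such a class, reusing the weather example from the other answer; IWeatherCallback is a made-up callback contract name, and WeatherChangedEvent is the PRISM event from the infrastructure module:

using Microsoft.Practices.Prism.Events; // namespace differs between PRISM versions

// Made-up WCF callback contract implemented on the client
public interface IWeatherCallback
{
    void OnWeatherChanged(int temperature);
}

// The only class that knows about WCF; everything else just sees Prism events
public class WeatherCallbackHandler : IWeatherCallback
{
    private readonly IEventAggregator _aggregator;

    public WeatherCallbackHandler(IEventAggregator aggregator)
    {
        _aggregator = aggregator;
    }

    public void OnWeatherChanged(int temperature)
    {
        // Republish the callback as a local, strongly typed Prism event
        _aggregator.GetEvent<WeatherChangedEvent>().Publish(temperature);
    }
}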
I personally wouldn't want to modify / extend the infrastructure component and create a dependency on the concrete WCF service.