I have developed a web application based on the Restlet API. As I add more features over time, I sometimes need to reuse a similar group of REST APIs under different endpoints, each providing a slightly different execution context (such as switching between database instances that share the same schema). I would like to refactor my code to make the API reusable and mount it at different endpoints. My initial thinking was to design an Application for each reusable API and attach them on the router:
router.attach("/context1",APIApplication.class)
router.attach("/foo/context2",APIApplication.class)
The API should be agnostic of its configuration. What is the best way to pass context information (for example, the database instance) to the Application? Is this approach viable and correct? What are the best practices for reusing REST APIs in Restlet? Some code samples to illustrate your answer would be appreciated.
Thanks for your help.
I have seen this basic set-up running using a Component as the top-level object, attaching the sub-applications to the VirtualHost rather than a router, as per this skeleton sample.
import javax.naming.InitialContext;
import javax.sql.DataSource;
import org.restlet.data.Protocol;
import org.restlet.routing.VirtualHost;

public class Component extends org.restlet.Component
{
    public Component() throws Exception
    {
        super();

        // Client protocols
        getClients().add(Protocol.HTTP);

        // Database connection, looked up once and shared via the Configuration POJO
        final DataSource dataSource = InitialContext.doLookup("java:ds");
        final Configuration configuration = new Configuration(dataSource);

        final VirtualHost host = getDefaultHost();

        // Portal modules
        host.attach("/path1", new FirstApplication());
        host.attach("/path2", new SecondApplication(configuration));
        host.attach("/path3", new ThirdApplication());
        host.attachDefault(new DefaultApplication(configuration));
    }
}
We used a custom Configuration object, basically a POJO, to pass any common configuration where required, and used it to construct the Applications; each Application got its own separate 'default' Context.
This was originally coded against Restlet 1.1.x and has been upgraded to 2.1.x via 2.0.x. Although it works and is reasonably neat, there may well be an even better way to do it in 2.1.x or 2.2.x.
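To make the wiring concrete, here is a minimal sketch of such a Configuration POJO and an Application that receives it through its constructor. ItemsResource, the field layout, and the routes are illustrative assumptions, not part of the original code:

import javax.sql.DataSource;
import org.restlet.Application;
import org.restlet.Restlet;
import org.restlet.resource.Get;
import org.restlet.resource.ServerResource;
import org.restlet.routing.Router;

// The plain configuration POJO handed to each reusable Application.
class Configuration {
    private final DataSource dataSource;

    public Configuration(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public DataSource getDataSource() {
        return dataSource;
    }
}

// An Application that receives its context via the constructor, so the same
// class can be attached at several endpoints with different configurations.
class DefaultApplication extends Application {
    private final Configuration configuration;

    public DefaultApplication(Configuration configuration) {
        this.configuration = configuration;
    }

    public Configuration getConfiguration() {
        return configuration;
    }

    @Override
    public Restlet createInboundRoot() {
        Router router = new Router(getContext());
        router.attach("/items", ItemsResource.class); // ItemsResource is illustrative
        return router;
    }
}

// Resources reach the configuration through their owning application.
class ItemsResource extends ServerResource {
    @Get
    public String list() {
        DefaultApplication app = (DefaultApplication) getApplication();
        // Use the DataSource carried by this endpoint's configuration.
        return "items from " + app.getConfiguration().getDataSource();
    }
}

Because the context travels with the Application instance, the same class can be attached at several endpoints, each constructed with a different database instance.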
Can someone tell me of, or give, a ready-made CRUD example using WebFlux, RSocket and Spring (or Spring Boot)?
I have studied the RSocket documentation and WebFlux, and written my own simple examples, but I would like to see a real CRUD application using the basic methods with RSocket.
I'll be very grateful.
Thanks.
I maintain a Spring/RSocket sample project covering the four basic interaction modes of RSocket.
If you only require the request/reply case for simple CRUD operations, check the request/response mode, and select a transport protocol, TCP or WebSocket.
To implement CRUD operations, just define four different routes for them, much like defining RESTful APIs with URIs. You have to plan the naming well, because in RSocket there are no HTTP methods to help you differentiate otherwise identical routes.
For example, on the server side, we can declare a @Controller to handle messages like this.
import org.springframework.messaging.handler.annotation.DestinationVariable;
import org.springframework.messaging.handler.annotation.MessageMapping;
import org.springframework.messaging.handler.annotation.Payload;
import org.springframework.stereotype.Controller;
import reactor.core.publisher.Mono;

@Controller
class ProfileController {

    @MessageMapping("fetch.profile.{name}")
    public Mono<Profile> fetch(@DestinationVariable String name) {
        return Mono.empty(); // look up the profile by name (implementation elided)
    }

    @MessageMapping("create.profile")
    public Mono<Message> create(@Payload CreateProfileRequest p) {
        return Mono.empty(); // create the profile (implementation elided)
    }

    @MessageMapping("update.profile.{name}")
    public Mono<Message> update(@DestinationVariable String name, @Payload UpdateProfileRequest p) {
        return Mono.empty(); // update the profile (implementation elided)
    }

    @MessageMapping("delete.profile.{name}")
    public Mono<Message> delete(@DestinationVariable String name) {
        return Mono.empty(); // delete the profile (implementation elided)
    }
}
On the client side, if it is a Spring Boot application, you can use RSocketRequester to interact with the server side like this.
// fetch a profile by name
requester.route("fetch.profile.hantsy").retrieveMono(Profile.class);
// create a new profile
requester.route("create.profile").data(new CreateProfileRequest(...)).retrieveMono(Message.class);
// update the existing profile
requester.route("update.profile.hantsy").data(new UpdateProfileRequest(...)).retrieveMono(Message.class);
// delete a profile
requester.route("delete.profile.hantsy").retrieveMono(Message.class);
Of course, if you just build a service exposed via the RSocket protocol, the client can be an rsocket-js project or one written in other languages and frameworks, such as Angular, React or Android, etc.
Update: I have added a CRUD sample to my RSocket sample codes, and I have published a post on Medium.
I have created a demo microservices application implemented with the help of Azure Function Apps. For separation of concerns, I have created an API Layer, Business Layer, and a Data Layer.
The API layer, being the function app, calls the business layer which implements the business logic while the data layer implements logic for storing and retrieving data.
After considerable thought, I have decided to use query-based API versioning for my demo.
The question I have is:
What is the best way to organize my code to facilitate this? Is there any other way to organize my code to accommodate the different versions, apart from using different namespaces / repos?
As of now, I've created separate namespaces for each version, but this has created a lot of code duplication. Also, after getting it reviewed by some of my friends, they raised the concern that if separate namespaces are used, I would be forcing legacy systems to change references to the new namespace whenever they need to update, which is not recommended.
Any help would be appreciated.
The simplest way to implement versioning in Azure Functions is using endpoints. The HttpTrigger attribute allows the definition of a custom route where you can set the expected version.
// Version 1 of get users
[FunctionName(nameof(V1UserList))]
public static IEnumerable<UserModel> V1UserList(
    [HttpTrigger(AuthorizationLevel.Function, "get", Route = "v1/users")] HttpRequest req, ILogger log)
{
    // v1 implementation elided
    return Enumerable.Empty<UserModel>();
}

// Version 2 of get users
[FunctionName(nameof(V2UserList))]
public static IEnumerable<UserModel> V2UserList(
    [HttpTrigger(AuthorizationLevel.Function, "get", Route = "v2/users")] HttpRequest req, ILogger log)
{
    // v2 implementation elided
    return Enumerable.Empty<UserModel>();
}
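Since the question mentions query-based versioning specifically, a single function can also dispatch on a version query parameter. Here is a rough, hypothetical sketch; the api-version parameter name and the per-version helpers are assumptions, and string stands in for UserModel:

using System.Collections.Generic;
using System.Linq;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class UsersApi
{
    [FunctionName(nameof(UserList))]
    public static IActionResult UserList(
        [HttpTrigger(AuthorizationLevel.Function, "get", Route = "users")] HttpRequest req, ILogger log)
    {
        // Dispatch on the query string rather than the route, e.g. GET /api/users?api-version=2.0
        string version = req.Query["api-version"];
        return version == "2.0"
            ? new OkObjectResult(GetUsersV2())
            : new OkObjectResult(GetUsersV1()); // default to v1 when absent or unknown
    }

    // Illustrative stand-ins for the real per-version implementations.
    private static IEnumerable<string> GetUsersV1() { return Enumerable.Empty<string>(); }
    private static IEnumerable<string> GetUsersV2() { return Enumerable.Empty<string>(); }
}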
When deploying each version in isolation, a router component is required to redirect requests to the correct API endpoint.
The router component can be implemented in Azure using different services, such as:
Azure Function Proxies: you can specify endpoints on your function app that are implemented by another resource. You can use these proxies to break a large API into multiple function apps (as in a microservice architecture), while still presenting a single API surface for clients.
API Management: Azure API Management supports importing Azure Function Apps as new APIs or appending them to existing APIs. The process automatically generates a host key in the Azure Function App, which is then assigned to a named value in Azure API Management.
Sample code for Versioning APIs in Azure Functions
There is an option in ServiceStack to add routes dynamically, using IAppHost.Routes.Add. This is quite handy, as it allows reusable services to be packaged in the form of a plugin and attached to actual applications.
To illustrate, here's an excerpt of application host configuration that enables a plugin and attaches routes:
class AppHost : AppHostHttpListenerBase
{
    public override void Configure(Container container)
    {
        // other stuff
        Plugins.Add(new MyReusablePlugin());

        Routes.Add(typeof(string), "/stuff", "GET");
        Routes.Add(typeof(string), "/other-stuff", "POST");
    }
}
The problem is, I can't find a way to specify that authentication is required for those dynamically added routes. As of ServiceStack 4.0.15 there is no overload of Routes.Add that allows specifying that those routes require authentication. Is it possible (maybe in newer versions)?
Ideally, such a mechanism should support specifying roles/permissions, just like RequiredRoleAttribute or RequiresAnyRoleAttribute.
One more thing: I'm aware of global request filters, and it looks like they are a bit too global to be a valid solution here, but I might be wrong. If it's possible to achieve the same result (including the ability to specify required roles/permissions) using request filters, I would be glad to accept an example of such an approach as an answer.
You can do this using the AddAttributes extension method provided on Type:
typeof(YourType).AddAttributes(new RequiredRoleAttribute("Admin"));
So you would need to do this in addition to calling Routes.Add.
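Putting the two together, a plugin's Register method can add the attribute before registering the route. A minimal sketch; StuffRequest and the route are illustrative stand-ins for your packaged service's request DTO:

using ServiceStack;

public class MyReusablePlugin : IPlugin
{
    public void Register(IAppHost appHost)
    {
        // Attach the role requirement to the request DTO type, then add the route.
        typeof(StuffRequest).AddAttributes(new RequiredRoleAttribute("Admin"));
        appHost.Routes.Add(typeof(StuffRequest), "/stuff", "GET");
    }
}

// Illustrative request DTO handled by the packaged service.
public class StuffRequest { }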
Long story as brief as possible...
I have an existing application that I'm trying to get ServiceStack into to create our new API. This app is currently an MVC3 app and uses the Unit of Work pattern, applied via attributes on MVC routes to create/finalize a transaction where the attribute is applied.
Trying to accomplish something similar using ServiceStack
This gist shows the relevant ServiceStack configuration settings. What I am curious about is the global request/response filters: these will create a new unit of work for each request and close it before sending the response to the client (there is a check in there so that if an error occurs writing to the db, we return an appropriate response to the client, and not a false "success" message).
My questions are:
1) Is this a good idea or not, or is there a better way to do this with ServiceStack?
2) In the MVC site we only create a new unit of work on an action that will add/update/delete data - should we do something similar here, or is it fine to create a transaction even when only retrieving data?
As mentioned in ServiceStack's IoC wiki, the Funq IoC container registers dependencies as singletons by default. So to register with RequestScope you need to specify it, as done here:
container.RegisterAutoWiredAs<NHibernateUnitOfWork, IUnitOfWork>()
    .ReusedWithin(ReuseScope.Request);
The registration below, though, is likely not what you want, as it registers as a singleton, i.e. the same instance is returned for every request:
container.Register<ISession>(c => {
    var uow = (INHibernateUnitOfWork)c.Resolve<IUnitOfWork>();
    return uow.Session;
});
You probably want to make it one of these instead:
.ReusedWithin(ReuseScope.Request); // per request
.ReusedWithin(ReuseScope.None); // executed each time it's injected
Using a RequestScope also works for global request/response filters, which will get the same instance as used in the service.
1) Whether you are using ServiceStack, MVC, WCF, Nancy, or any other web framework, the most common method to use is the session-per-request pattern. In web terms, this means creating a new unit of work at the beginning of the request and disposing of it at the end of the request. Almost all web frameworks have hooks for these events; a rough sketch using ServiceStack's global filters follows the links below.
Resources:
https://stackoverflow.com/a/13206256/670028
https://stackoverflow.com/search?q=servicestack+session+per+request
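For illustration only, here is what the hook-up might look like with ServiceStack's (v4-style) global filters. IUnitOfWork with Begin/Commit/Rollback is a hypothetical interface mirroring the names used above, and real error handling needs more care:

public override void Configure(Container container)
{
    container.RegisterAutoWiredAs<NHibernateUnitOfWork, IUnitOfWork>()
        .ReusedWithin(ReuseScope.Request);

    // Open the unit of work when the request starts...
    GlobalRequestFilters.Add((req, res, dto) =>
        req.TryResolve<IUnitOfWork>().Begin());

    // ...and complete it just before the response is written.
    GlobalResponseFilters.Add((req, res, dto) =>
    {
        var uow = req.TryResolve<IUnitOfWork>();
        if (res.StatusCode >= 400)
            uow.Rollback(); // surface the real error, not a false success
        else
            uow.Commit();
    });
}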
2) You should always interact with NHibernate within a transaction.
Please see any of the following for an explanation of why:
http://ayende.com/blog/3775/nh-prof-alerts-use-of-implicit-transactions-is-discouraged
http://www.hibernatingrhinos.com/products/nhprof/learn/alert/DoNotUseImplicitTransactions
Note that when switching to using transactions with reads, be sure to make yourself aware of NULL behavior: http://www.zvolkov.com/clog/2009/07/09/why-nhibernate-updates-db-on-commit-of-read-only-transaction/#comments
So far I have found that MEF goes well with the presentation layer, with the following benefits:
a. DI (Dependency Injection)
b. Third party extensibility (Note that all parties involved should use MEF or need wrappers)
c. Auto discovery of Parts (Extensions)
d. MEF allows tagging extensions with additional metadata which facilitates rich querying and filtering
e. Can be used to resolve versioning issues together with “DLR and C# dynamic references” or “type embedding”
Please correct me if I'm wrong.
I'm doing research on whether to use MEF in the service layer with WCF. Please share your experience using these two together: how is MEF helping you?
Thanks,
Nils
Update
Here is the result of my research so far. Thanks to Matthew for helping with it.
MEF for the core services - the cost of the changes does not justify the benefits. This is also a big decision that may affect the service layer for good or bad, so it needs a lot of study. MEF v2 (waiting for a stable version) might be better in this case, but I am a little worried about using MEF v1 here.
MEF for the functions the service performs - MEF might add value, but it is very specific to the function of the service. We need to go deep into the service's requirements to make that decision.
The study is an ongoing process, so everyone, please share your thoughts and experience.
I think any situation that would benefit from separation of concerns would benefit from IoC. The problem you face here is how you require MEF to be used within your service: would it be for the core service itself, or for some function the service performs?
As an example, if you want to inject services into your WCF services, you could use something similar to the MEF for WCF example on CodePlex. I haven't looked too much into it, but essentially it wraps the service location via an IInstanceProvider, allowing you to customise how your service type is created. Not sure if it supports constructor injection (which would be my preference), though.
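To sketch the idea behind that sample: a WCF IInstanceProvider can let MEF satisfy a service's [Import]s at creation time. This rough sketch uses property injection only; wiring it into WCF (via a service or contract behavior) is omitted, and the class names are illustrative:

using System;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

public class MefInstanceProvider : IInstanceProvider
{
    private readonly CompositionContainer _container;
    private readonly Type _serviceType;

    public MefInstanceProvider(CompositionContainer container, Type serviceType)
    {
        _container = container;
        _serviceType = serviceType;
    }

    public object GetInstance(InstanceContext instanceContext, Message message)
    {
        // Create the service, then let MEF fill in its [Import] properties.
        var instance = Activator.CreateInstance(_serviceType);
        _container.SatisfyImportsOnce(instance);
        return instance;
    }

    public object GetInstance(InstanceContext instanceContext)
    {
        return GetInstance(instanceContext, null);
    }

    public void ReleaseInstance(InstanceContext instanceContext, object instance)
    {
        var disposable = instance as IDisposable;
        if (disposable != null) disposable.Dispose();
    }
}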
If the WCF service component isn't where you want to use MEF, you can still take advantage of MEF for creating subsets of components used by the service. Recently, for the company I work for, we've been rebuilding our quotation process, and I've built a flexible workflow calculation model whereby the workflow units are MEF-composed parts that can be plugged in where needed. The important part here is managing how your CompositionContainer is used in relation to the lifetime of your WCF service (e.g. singleton behaviour, etc.). This is quite important if you decide to create a new container each time (container creation is quite cheap, whereas catalog creation can be expensive).
Hope that helps.
I'm working on a solution where the MEF parts that I want to use across WCF calls are stored in a singleton at the application level. This is all hosted in IIS. The services are decorated to be compatible with ASP.NET.
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
In Global.asax, I import the parts.
[ImportMany(typeof(IOption))]
public IEnumerable<IOption> AvailableOptions{ get; set; }
After initializing the catalog and container, I copy the imported objects to my singleton class.
container.ComposeParts(this);
foreach (var option in AvailableOptions)
    OptionRegistry.AddOption(option);
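For context, the catalog/container initialization mentioned above might look like this in Global.asax; the bin-folder DirectoryCatalog is an assumption (any catalog will do):

using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

public class Global : System.Web.HttpApplication
{
    [ImportMany(typeof(IOption))]
    public IEnumerable<IOption> AvailableOptions { get; set; }

    // Keep the container alive for the application's lifetime so parts stay valid.
    private CompositionContainer _container;

    protected void Application_Start()
    {
        var catalog = new DirectoryCatalog(Server.MapPath("~/bin"));
        _container = new CompositionContainer(catalog);
        _container.ComposeParts(this); // satisfies AvailableOptions

        foreach (var option in AvailableOptions)
            OptionRegistry.AddOption(option);
    }
}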
EDIT:
My registry class:
public static class OptionRegistry
{
    private static List<IOption> _availableOptions = new List<IOption>();

    public static void AddOption(IOption option)
    {
        if (!_availableOptions.Contains(option))
            _availableOptions.Add(option);
    }

    public static List<IOption> GetOptions()
    {
        return _availableOptions;
    }
}
This works but I want to make it thread safe so I'll post that version once it's done.
Thread-safe Registry:
public sealed class OptionRegistry
{
    private readonly List<IOptionDescription> _availableOptions;

    static readonly OptionRegistry _instance = new OptionRegistry();

    public static OptionRegistry Instance
    {
        get { return _instance; }
    }

    private OptionRegistry()
    {
        _availableOptions = new List<IOptionDescription>();
    }

    public void AddOption(IOptionDescription option)
    {
        lock (_availableOptions)
        {
            if (!_availableOptions.Contains(option))
                _availableOptions.Add(option);
        }
    }

    public List<IOptionDescription> GetOptions()
    {
        // Return a snapshot under the same lock so readers never observe a mid-update list.
        lock (_availableOptions)
        {
            return new List<IOptionDescription>(_availableOptions);
        }
    }
}
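To show the registry in use, here is a hypothetical WCF service reading the composed options; the contract and method names are illustrative:

using System.Collections.Generic;
using System.ServiceModel;
using System.ServiceModel.Activation;

[ServiceContract]
public interface IOptionsService
{
    [OperationContract]
    IList<IOptionDescription> GetAvailableOptions();
}

[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
public class OptionsService : IOptionsService
{
    public IList<IOptionDescription> GetAvailableOptions()
    {
        // Safe across concurrent calls: GetOptions returns a snapshot.
        return OptionRegistry.Instance.GetOptions();
    }
}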
A little while ago I was wondering how I could create a WCF web service that gets all of its dependencies wired up by MEF, without my having to write a single line of that wire-up code inside the service class.
I also wanted it to be completely configuration-based, so I could just take my generic solution to the next project without having to make code changes.
Another requirement I had was that I should be able to unit-test the service and mock out its different dependencies in an easy way.
I came up with a solution that I've blogged about here: Unit Testing, WCF and MEF.
Hopefully it will help people trying to do the same thing.