I need to implement a lightweight sharding service over HTTP using Akka HTTP. To illustrate the idea, let me introduce a fake service with only one REST resource, a list called /users/. Clients can create new users by POSTing to the list, or query an existing user by ID at /users/:userID.
The sharding service simply routes to the right shard (concrete service). For POST it creates a new ID and decides which service will handle it; for GET or DELETE it takes the user's ID and routes to the service handling that user.
The following image shows how it works.
I am new to the Akka HTTP framework, but since the concrete service is already implemented, I just have to implement a sort of transparent proxy in front that takes requests from the client and forwards them to the right concrete service according to the routing strategy.
Is there any simple implementation in Akka that does just that?
This link is a repo containing a solution that does round-robin routing for a reverse proxy implemented using Akka HTTP. A small change to handle sharding instead will solve this question.
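The routing strategy itself is independent of the HTTP framework. As a rough, framework-agnostic sketch (plain HttpClient rather than Akka HTTP; the shard list and hashing scheme are assumptions for illustration), ID-based routing could look like this:

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

public class ShardRouter
{
    private readonly IReadOnlyList<Uri> _shards;       // base URIs of the concrete services
    private readonly HttpClient _http = new HttpClient();

    public ShardRouter(IReadOnlyList<Uri> shards) { _shards = shards; }

    // Deterministically map a user ID to one concrete service.
    // (Use a stable hash in production; string.GetHashCode can differ between processes.)
    private Uri ShardFor(string userId) =>
        _shards[Math.Abs(userId.GetHashCode()) % _shards.Count];

    // GET /users/:userID -> forward to the shard that owns the ID.
    public Task<HttpResponseMessage> GetUser(string userId) =>
        _http.GetAsync(new Uri(ShardFor(userId), "/users/" + userId));

    // POST /users/ -> create the new ID here, then forward to the chosen shard.
    public Task<HttpResponseMessage> CreateUser(HttpContent body)
    {
        var userId = Guid.NewGuid().ToString("N");
        return _http.PostAsync(new Uri(ShardFor(userId), "/users/" + userId), body);
    }
}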
We are trying to build an NServiceBus service that can communicate with WinForms and WPF clients using WCF. I have read that you can inherit from WcfService,
like this:
public class ThirdPartyWebSvc : WcfService<ThirdPartyCmd, ThirdPartyCmdResponse>
And then you simply create an endpoint in the app.config and you're done, as described here. But the problem is that I have to create an endpoint for every command.
I would like to have a single endpoint that accepts any command and returns its response.
public class ThirdPartyWebSvc : WcfService<ICommand, IMessage>
Can someone point me in the right direction? Using NServiceBus for client communication isn't an option for us, and I don't want to build a proxy-like server unless that's the only way to do it.
Thanks
So from what I can gather, you want to expose a single WCF service operation to which consumers can polymorphically pass one of a number of possible commands, and have the service route each command to the correct NServiceBus endpoint, which then handles it.
Firstly, in order to achieve this you should forget about using the NServiceBus.WcfService base class, because to use it you must closely follow the guidance in the article you linked in your post.
Instead, you could:
design your service operation contract to accept polymorphic requests by using the ServiceKnownType attribute on your operation definition, adding all possible command types (sketched after this list),
host the service using a regular System.ServiceModel.ServiceHost, and configure an NServiceBus.IBus in the startup of your hosted WCF service, and
define your UnicastBusConfig section in your service config file, adding all the command types along with the recipient queue addresses.
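A minimal sketch of the first two points, assuming the ThirdPartyCmd command from the question (the other type and class names are hypothetical, and the NServiceBus bus bootstrapping is only indicated, not shown):

using System.ServiceModel;
using NServiceBus;

// One polymorphic operation; every concrete command type must be listed.
[ServiceContract]
public interface ICommandGateway
{
    [OperationContract]
    [ServiceKnownType(typeof(ThirdPartyCmd))]   // add one attribute per command type
    [ServiceKnownType(typeof(SomeOtherCmd))]    // hypothetical second command
    IMessage Submit(ICommand command);
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
public class CommandGateway : ICommandGateway
{
    private readonly IBus _bus;                 // configured once at service startup

    public CommandGateway(IBus bus) { _bus = bus; }

    public IMessage Submit(ICommand command)
    {
        // The owning endpoint is resolved from the UnicastBusConfig
        // message-endpoint mappings in the service's config file (third point).
        _bus.Send(command);
        return null;                            // correlating a reply is extra work you would need to add
    }
}

// Hosted with a plain ServiceHost rather than NServiceBus' WcfService<,>:
// var host = new ServiceHost(new CommandGateway(bus));
// host.Open();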
However, you now have the following drawbacks:
Because of the requirement to pass implementations of ICommand into the service, you will need to recompile your operation contract each time you add a new command type.
You will need to manage a large quantity of routing information in the config file, and if any of the recipient endpoints change, you will need to change your service config.
If your service has availability problems, no messages can reach any of your NSB endpoints.
You will need to write code to handle what to do if you do not receive a response message from the NSB endpoints in a timely manner, and this logic may depend on the type of command sent.
I hope you are beginning to see how centralizing this functionality is not a great idea.
All the above problems would go away if you could get your clients to send commands to the bus in the standard way, but without MSMQ how can you do that?
Well, for a start you could look at using one of the other supported transports.
If none of these work for you and you have to use WCF hosted services, then you must follow the guidance in the linked article. This guidance is there to steer you in the correct direction - multiple WCF services sound like a pain, until you try to centralize them into a single service - then the pain gets bigger, not smaller.
Trying to figure out a pattern for how to handle relationships when using hypermedia-based microservices built on Spring Data REST or HATEOAS.
Say you have Service A (Instructor) and Service B (Course), each existing as a stand-alone app.
What is the preferred method for establishing a relationship between the two services, in a manner that does not require columns for the IDs of the foreign service? Each service could have many other services that need to communicate in the same manner.
Possible solution (not sure it's the correct path):
Each service has a second table with a OneToMany relationship to the primary entity within the service. The table would have the following fields:
ID, entityID, rel, relatedID
Then in the opposite service, using Spring Data REST, set up a finder that queries the join table for matching records.
The primary goal I want to accomplish is that any service can have relationships with any number of other services without needing knowledge of the other service.
The basic steps are the following ones:
The service needs to discover the resources of the other service.
The service then adds a link to the resources it renders where necessary.
I have a very rudimentary example of these steps in this repository. The example consists of two services: a service that provides geo-spatial searches for stores, and some rudimentary customer management that optionally integrates with the store service if it is currently available.
Here's how the steps are implemented:
Resource discovery
In my example the consuming service (i.e. the customer one) uses Spring HATEOAS' Traverson API to traverse a set of link relations until it finds a link named by-location. This is done in StoreIntegration. So all the client service needs to know is the root URI (taken from the environment in my case) and a set of link relations. It periodically checks the link for existence using a HEAD-request.
This of course can be done in a more sophisticated manner: hard-wiring the base URI into the client service might be considered suboptimal, but it actually works quite well if you're using DNS anyway (so that you can exchange the actual host behind the hard-coded URI). Nonetheless it's a decent pragmatic approach: it still rediscovers the other service if it changes URIs, and no additional libraries are required.
For an even more sophisticated approach, have a look at Netflix's Eureka library, which is basically a service registry. Also, you might want to check out the Spring Cloud integration we have for that.
Augmenting resources with links
Spring HATEOAS provides a ResourceProcessor API that Spring Data REST leverages. It allows you to manipulate the Resource instance about to be rendered and e.g. add links to it. The implementation for the customers service can be found here.
It basically takes the link just discovered in the steps above and expands it using well-known parameters and thus allows clients to trigger a store geo-search by just following the link.
Beyond that
You can find a more sophisticated variant of this example in the examples projects for Spring Cloud. It takes the very same example but switches to Spring Cloud components such as Eureka integration, gathering metrics, adding UI etc.
In my case I can only derive related items from the service itself. My goal is to abstract the related items to the point that any number of services can be related to a service and only need to look up IDs or links. One thought was an @ElementCollection named related, joined on the entity ID of the service. Then in the @Embeddable have a relLink field and a relatedID field, and in the repository do a findBy on relLink and relatedID.
The hope is to keep it abstracted enough to essentially mimic a many-to-many setup.
I've been given a WSDL file and have to create a web service client using Axis2. I've been able to generate the CallbackHandler and Stub using WSDL2Java. I've tried following this tutorial to create the client: http://briansjavablog.blogspot.com.au/2013/01/axis2-web-service-client-tutorial.html
I'm not sure if I implemented the client properly. It runs, but I'm not sure how you view any output results. I've never dealt with web services before. The Stub file that was generated contains so much code, how am I supposed to know what I should be calling? All tutorials I've found give example Clients, but I want to know what I need to look at to create my own.
If anyone has any advice or links to creating clients that are easy to understand, it would be appreciated.
I think that this probably went unanswered for a while because the question is not clear, and you probably need an introduction to Web Services and SOAP in general. If you are given the WSDL (or can pull it from a URL out there somewhere) then you are using the Web Service as a client - you have (from the post) already created the stub for client use. You simply need to use it. You are sending a request to the server (Web Service) with the data that it requires (as the SOAP parameters laid out in the Web Service schema). Based on this SOAP request you will get a response. The stubs created for the client act as the invocation and response points for your client.
So your question as to how do you test it: you decide what to do with the response as this is what you are coding into the client.
And about creating your own Web Service: you would need to start with a schema. Often you write your objects/data and the functions that you want them to perform, and tools (like Axis2) will generate the server code (for Web Services and SOAP transport) on top of this.
So to your question, I think that you need to a) check out some Web Services books/online tutorials to figure out what it all is, b) code your client to display the results - and just make sure that you are actually sending requests to and getting responses from the Web Service, and c) also see what it would take to create your own Web Service, for whatever purpose you are planning the service for, before creating your own.
Effectively I think that you just need to get your feet wet with Web Services in the first place. And the tutorial that you pointed out ( http://briansjavablog.blogspot.com.au/2013/01/axis2-web-service-client-tutorial.html) is excellent for anyone looking to get a web services client started - thanks for posting that.
We have a service that makes NetTcp calls to other services on different servers. I'm trying to optimize it by caching the ChannelFactories, and by increasing maxOutboundConnectionsPerEndpoint.
I have been using GOOGLE.COM (great website, by the way. Check it out if you haven't heard of them) to try to understand how the channel caching works, and I don't think I have it correct.
Reusing the channel factory - good idea.
Caching channels myself - bad idea. WCF already does this, as long as all of your channels are created from the same factory. Is that right? And if you create a new factory each time, nothing gets cached?
// factory initialization
var address = "net.tcp://something:8888/testservices";
var factory = new ChannelFactory<ITestService>("configname", new EndpointAddress(address));
// do stuff
var client = factory.CreateChannel(); // you can also pass an address here - how does that affect channel caching?
So that handles the factory stuff. Now, I want to increase maxOutboundConnectionsPerEndpoint, which is achieved via a custom binding. The thing that's not clear there is the GroupName. I can't specify distinct group names in configuration because I'm looking up the URL at runtime. Thus, all channel factories are going to have the same default group name. Are they going to share the same pool, or does each factory have its own?
In the end, I would like to create the custom binding in configuration, then create/cache multiple channel factories as needed, and use them to establish channels. And, I need to increase the maxOutboundConnectionsPerEndpoint per endpoint. I may be calling the same service on 20 different machines, and I want up to 50 cached channels for each. Does the code above achieve that?
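For illustration, a minimal sketch of the factory-per-endpoint cache described above (reusing the "configname" endpoint configuration from the snippet; the class and member names are assumptions):

using System;
using System.Collections.Concurrent;
using System.ServiceModel;

public static class TestServiceChannels
{
    // One cached factory per endpoint address; channels are created per call from the cached factory.
    private static readonly ConcurrentDictionary<string, ChannelFactory<ITestService>> Factories =
        new ConcurrentDictionary<string, ChannelFactory<ITestService>>();

    public static ITestService CreateChannel(string address)
    {
        // maxOutboundConnectionsPerEndpoint would be raised in the custom binding
        // that the "configname" endpoint configuration points to.
        var factory = Factories.GetOrAdd(
            address,
            a => new ChannelFactory<ITestService>("configname", new EndpointAddress(a)));
        return factory.CreateChannel();
    }
}

// usage - close/abort the returned channel as usual:
// var client = TestServiceChannels.CreateChannel("net.tcp://something:8888/testservices");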
Thank you for your help. If there is a link that covers this, please forward. I haven't been able to find it.
Just started my first WCF REST project and would like some help on what the best practices for using REST are.
I have seen a number of tutorials and there seem to be a number of ways to do things... for example, when doing a POST, I have seen some tutorials setting HttpStatusCodes (OK/errors etc.), and other tutorials that just return strings containing the result of the operation.
At the end of the day, there are 4 operations and surely there must be a guide that says if you are doing a GET, do it this way, etc and with a POST, do this...
Any help would be appreciated.
JD
UPDATE
Use ASP.NET Web API.
OK, I left the comment "REST best practices: don't use WCF REST, just avoid it like the plague", and I feel I have to explain it.
One of the fundamental flaws of WCF is that it is concerned only with the payload. For example, Foo and Bar are the payloads here:
[OperationContract]
public Foo Do(Bar bar)
{
...
}
This is one of the tenets of WCF: no matter what the transport is, we get the payload over to you.
But what it ignores is the context/envelope of the call, which in many cases is transport specific - so a lot of the context gets lost. In fact, HTTP's power lies in its context, not its payload. Back in the earlier versions of WCF there was no way to get the client's IP address in netTcpBinding, and the WCF team were adamant that they could not provide it. I cannot find the page now, but I remember reading the comments where the MS guys just said this is not supported.
Using WCF REST, you lose the flexibility of HTTP in expressing yourself clearly (and they had to budge on this later) in terms of:
HTTP Status code
HTTP media types
ETag, ...
The new Web API that Glenn Block is working on addresses this issue by encapsulating the payload in the context:
public HttpResponse<Foo> Do(HttpRequest<Bar> bar) // PSEUDOCODE
{
...
}
But from my testing this is not perfect, and I personally prefer to use frameworks such as Nancy or even plain ASP.NET MVC to expose a web API.
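For what it's worth, in the Web API bits that eventually shipped the envelope is exposed through HttpRequestMessage/HttpResponseMessage rather than generic wrappers. A rough sketch of regaining control over the status code and ETag (Foo, FooStore and the Version property are placeholders):

using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Web.Http;

public class FoosController : ApiController
{
    // GET api/foos/5 - status code, negotiated media type and ETag are all
    // under your control here, unlike a bare payload-only WCF operation.
    public HttpResponseMessage Get(int id)
    {
        var foo = FooStore.Find(id);                     // FooStore is a placeholder
        if (foo == null)
            return Request.CreateResponse(HttpStatusCode.NotFound);

        var response = Request.CreateResponse(HttpStatusCode.OK, foo);
        response.Headers.ETag = new EntityTagHeaderValue("\"" + foo.Version + "\"");
        return response;
    }
}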
There are some basic rules for using the different HTTP verbs that come from the HTTP specification:
GET: This is a pure read operation. Invocation must not cause state change in the service. The response to a GET may be delivered from cache (local, proxy, etc) depending on caching headers
DELETE: Used to delete a resource
There is sometimes some confusion around PUT and POST - which should be used when? To answer that you have to consider idempotency - whether the operation can be repeated without affecting service state. For example, setting a customer's name to a value can be repeated multiple times without further state change; however, incrementing a customer's bank balance cannot safely be repeated without further state change on the service. The first is said to be idempotent, the second is not.
PUT: Non-delete state changes that are idempotent
POST: Non-delete state changes that are not idempotent
REST embraces HTTP - therefore outcomes should be communicated using HTTP status codes: 200 for success; 201 for creation, where the service should return a URI for the new resource in the HTTP Location header; 4xx for failures due to the nature of the client request (so they can be fixed by the client changing what it is doing); and 5xx for server errors that can only be resolved server side.
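A short sketch of those rules in ASP.NET Web API (the Customer type, CustomerStore and the "DefaultApi" route name are assumptions): GET is a pure read, POST creates and returns 201 with a Location header, PUT is the idempotent update.

using System;
using System.Net;
using System.Net.Http;
using System.Web.Http;

public class CustomersController : ApiController
{
    // GET api/customers/5 - pure read, no state change on the service
    public Customer Get(int id)
    {
        return CustomerStore.Find(id);                   // 404 handling omitted for brevity
    }

    // POST api/customers - not idempotent: every call creates a new resource
    public HttpResponseMessage Post(Customer customer)
    {
        var created = CustomerStore.Add(customer);       // assigns the new ID
        var response = Request.CreateResponse(HttpStatusCode.Created, created);
        response.Headers.Location = new Uri(Url.Link("DefaultApi", new { id = created.Id }));
        return response;
    }

    // PUT api/customers/5 - idempotent: repeating the call leaves the same state
    public HttpResponseMessage Put(int id, Customer customer)
    {
        CustomerStore.Replace(id, customer);
        return Request.CreateResponse(HttpStatusCode.OK, customer);
    }
}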
There's something missing here that needs to be said.
WCF REST may not be able to provide all the functionality of a fully RESTful API, but it is able to put a REST face on existing WCF services. So if you decide to provide some sort of REST support on top of the current SOAP/named pipe endpoints, it's the way to go when the ROI is low.
Hand-rolling a full-blown REST implementation may be ideal, but it's not always economical. In 90% of my projects the REST API is an afterthought, and WCF comes in quite handy in that regard.