We manage an application that consumes a number of external services as part of its general operation. Some are SOAP services, others RESTful APIs. Some are managed by us, others are third-party services. Some are central to the application's functionality, others are more auxiliary/non-mandatory.
Each external service exposes a 'test' and 'live' environment. We currently follow the policy that when our application is under test (that's development, testing and staging phases), it should consume the test version of the external service. It is only in our live environment that the live versions of the services are consumed.
There is a not-insignificant amount of overhead in managing which version of the service to consume in each environment, but this is not the issue. My question is whether this policy is a good idea. Would we be better served by always consuming the live versions of external services? Have we made a mistake in exposing the test versions of the external services we manage ourselves, i.e. should test environments remain private?
We have not (yet) been burned by not pointing to live external services until the application reaches 'live', but I accept that part of our problem is that we lack granularity in our environments: by grouping development, testing and staging under the 'test' umbrella, we lose the ability to test against live external services.
All I realise at the moment is that there is little to be gained by consuming the test services in the test environments. There is negligible cost involved in consuming live third-party external services. Also, our own services might need to be aware that they are being consumed by a client in the 'test' phase, but this could probably be accommodated.
I understand that the scenario is somewhat open-ended, but there only seem to be two ways to go.
My concern would be accidentally modifying production data when running non-production instances of your application. As soon as you do one SetX(), POST/PUT, insert into/update, what have you, you are up the creek. That's a sneaky kind of bug that can be very hard to find.
If you're strictly consuming, then in theory it doesn't make a difference. In practice, I'd still be concerned. In your position, I'd probably be quite happy to have a non-live option. Otherwise I'd be thinking about stubbing out all those external services.
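If you go the stubbing route, a minimal sketch might look like this; IPaymentGateway, LivePaymentGateway and StubPaymentGateway are made-up names for illustration, not anything from your system.

// Hide each external service behind an interface so non-production environments
// never touch anyone's live system. All names here are hypothetical.
using System.Threading.Tasks;

public interface IPaymentGateway
{
    Task<bool> AuthorizeAsync(string accountId, decimal amount);
}

// Real client; the endpoint (test or live) comes from environment configuration.
public class LivePaymentGateway : IPaymentGateway
{
    private readonly string _endpoint;

    public LivePaymentGateway(string endpoint)
    {
        _endpoint = endpoint;
    }

    public Task<bool> AuthorizeAsync(string accountId, decimal amount)
    {
        // Call the external service at _endpoint here.
        return Task.FromResult(true);
    }
}

// Stub used in development/testing: no network calls, deterministic behaviour,
// and no chance of a stray POST/PUT touching production data.
public class StubPaymentGateway : IPaymentGateway
{
    public Task<bool> AuthorizeAsync(string accountId, decimal amount)
    {
        return Task.FromResult(amount < 1000m);
    }
}

Your non-live environments then wire in the stub (or the third party's test endpoint), and only the live environment gets the real client.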
I am in a situation where I can use Service Fabric (locally) but cannot leverage Azure Service Bus (or anything "cloud"). What would be the equivalent for queuing/pub-sub? Service Fabric is allowed since it is able to run in a local container and is "free". Other third-party messaging infrastructure, like RabbitMQ, is also off the table (at the moment).
I've built systems using a home-grown bus, built on MSMQ and WCF, but I don't see how to accomplish the same thing in SF. I suspect I can have SF services use a custom ICommunicationListener that exposes MSMQ, but that would only be available inside the cluster (the way I understand it). I could build an HTTP bridge (in SF) in front of those to make them available outside the cluster, but then I'd lose the lifetime decoupling (a client being able to call a service, via queues, even if that service isn't online at the time), since the bridge itself wouldn't benefit from any of the aspects of queuing.
I have a few possibilities but all suffer from some malady that only exists because of SF, locally. Also, the same code needs to easily deploy to full Azure SF (where I can use ASB and this issue disappears) so I don't want to build two separate systems just because of where I am hosting it in some instances.
Thanks for any tips.
You can build this yourself, for example with a BrokerService that distributes message data to subscribed services and actors.
You can also run a containerized queuing platform like RabbitMQ with volumes.
By running the queue system inside the cluster you won't introduce an external dependency.
The problem is not SF. The main issue with your design is that you are coupling architectural requirements to implementations. SF runs on top of virtual machines; in the end, the only difference is that SF places the services on those machines, whereas with another solution you would have an agent deploying the services there, or a manual deployment. The challenges are the same.
It is clear from the description that the requirement in your design is a message queue. The concept of a queue is the same whether it is Service Bus, RabbitMQ or MSMQ. Each of them provides the basic foundations of queuing plus implementation-specific extras: some add transactions, some implement multiple patterns, and so on.
If you design around a specific implementation, you couple your solution to that implementation, make it hard to maintain, and face challenges like the ones you describe.
Solutions like NServiceBus and MassTransit remove much of this coupling from your code, and if you think they are not enough, you can create your own abstraction. You then use configuration to tie your business logic to a concrete implementation.
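A minimal sketch of such an abstraction, using hypothetical names (IMessageBus, MsmqBus, ServiceBusBus) and a plain app-setting switch; real code would more likely use a DI container.

// Business logic depends only on IMessageBus; configuration decides which
// transport is used in a given environment. All names here are illustrative.
using System;
using System.Configuration;
using System.Threading.Tasks;

public interface IMessageBus
{
    Task PublishAsync(string topic, byte[] payload);
}

public class MsmqBus : IMessageBus
{
    public Task PublishAsync(string topic, byte[] payload)
    {
        // Write to a local/in-cluster queue here.
        return Task.CompletedTask;
    }
}

public class ServiceBusBus : IMessageBus
{
    public Task PublishAsync(string topic, byte[] payload)
    {
        // Send to an Azure Service Bus topic here.
        return Task.CompletedTask;
    }
}

public static class MessageBusFactory
{
    // e.g. <add key="MessageBus" value="Msmq" /> in app.config
    public static IMessageBus Create()
    {
        string kind = ConfigurationManager.AppSettings["MessageBus"];
        return string.Equals(kind, "ServiceBus", StringComparison.OrdinalIgnoreCase)
            ? (IMessageBus)new ServiceBusBus()
            : new MsmqBus();
    }
}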
Despite the above advice, I would not recommend using different solutions per environment because, as said previously, each solution has its own implementation details and they do not behave identically. For example, you might face issues in production because you developed against MSMQ in the DEV and TEST environments but use Service Bus in production; they have different limitations, such as message size, retention period, and so on.
If you are willing to use MSMQ, you can add MSMQ to the VMs running your cluster and connect from your services without any issue. Take a look at this SO question first: How can I use MSMQ in Azure Service Fabric.
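As a rough sketch of the custom ICommunicationListener idea from the question: the class name and queue path below are made up, and real code would need transactions, poison-message handling and proper error handling.

// Sketch only: an ICommunicationListener that pumps messages from a local MSMQ
// queue inside the cluster and hands them to the hosting service.
using System;
using System.Messaging;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Services.Communication.Runtime;

public class MsmqCommunicationListener : ICommunicationListener
{
    private readonly string _queuePath;          // e.g. @".\private$\orders"
    private readonly Action<Message> _handler;   // supplied by the hosting service
    private MessageQueue _queue;
    private CancellationTokenSource _cts;

    public MsmqCommunicationListener(string queuePath, Action<Message> handler)
    {
        _queuePath = queuePath;
        _handler = handler;
    }

    public Task<string> OpenAsync(CancellationToken cancellationToken)
    {
        if (!MessageQueue.Exists(_queuePath))
            MessageQueue.Create(_queuePath);

        _queue = new MessageQueue(_queuePath);
        _cts = new CancellationTokenSource();

        // Pump messages on a background task.
        Task.Run(() =>
        {
            while (!_cts.IsCancellationRequested)
            {
                try
                {
                    var message = _queue.Receive(TimeSpan.FromSeconds(1));
                    _handler(message);
                }
                catch (MessageQueueException)
                {
                    // Receive timed out; loop and try again.
                }
            }
        });

        // The string returned here is published as the listener's address.
        return Task.FromResult(_queuePath);
    }

    public Task CloseAsync(CancellationToken cancellationToken)
    {
        _cts?.Cancel();
        _queue?.Dispose();
        return Task.CompletedTask;
    }

    public void Abort()
    {
        _cts?.Cancel();
        _queue?.Dispose();
    }
}

As noted in the question, a listener like this is only reachable from inside the cluster unless you bridge it.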
Normally I host my WCF services in IIS, but I've been told by a colleague that services run better (performance-wise) when self-hosted in local Windows Services.
Is this true? What are the pros and cons for each?
Without knowing how your services are designed, how much CPU/memory they consume, how many concurrent clients they support, how they are accessed, etc., it's hard to make a general statement about which method is better/faster. So I'll share my previous experience.
Initially we hosted our WCF services in local Windows Services, after doing some rudimentary performance testing of the two WCF deployment methods. Hosting them in Windows Services was slightly (not noticeably) faster. Our setup: .NET 4.0, REST, wsHttpBinding, roughly 3,000 concurrent calls in total, a load-balanced server farm, a memory-intensive workload, and default WCF settings (we didn't tweak them initially).
Then we noticed the memory on our WCF servers was maxing out several days after starting the service, and when it happened our services sporadically threw strange exceptions. We turned on perf counters on our servers. That's when we learned that perf profiling WCF services hosted in a Windows Service didn't give us a whole lot of insight, because many perf counters simply didn't return any info at all, which was confirmed by the Microsoft Tech Support team. We also used ANTS to look for memory leaks but didn't find any major issue in our code. We then started tweaking WCF settings (e.g. maxBufferPoolSize) attentively with help from Microsoft consultants. Ultimately we came to the conclusion that GC wasn't happening frequently enough to free up allocated memory. We even tried switching from workstation-mode GC to server mode, which actually ended up worsening the problem.
As a last resort, we switched to IIS. The performance of the service didn't get any better, which was fully expected. However, some of the IIS-specific perf counters confirmed our suspicion that GC wasn't happening frequently enough. We then found this wonderful setting in IIS that allowed us to specify when and how often to recycle an app pool. Yes, we could have developed a simple custom solution to restart our WCF services, but why reinvent the wheel, we thought. Additionally, when you recycle an app pool, IIS doesn't kill it abruptly. Instead, it creates a new one to handle subsequent requests while the old one stays alive for a configurable amount of time to finish processing all outstanding requests. That built-in capability allowed us to maintain our uptime SLA.
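For illustration only (not the original team's configuration), app-pool recycling can also be set programmatically via Microsoft.Web.Administration; the pool name below is hypothetical, and the same settings are available in IIS Manager under the application pool's Recycling settings.

// Sketch: set periodic restart and a private-memory recycling threshold.
// Requires a reference to Microsoft.Web.Administration and admin rights.
using System;
using Microsoft.Web.Administration;

class ConfigureRecycling
{
    static void Main()
    {
        using (var serverManager = new ServerManager())
        {
            ApplicationPool pool = serverManager.ApplicationPools["WcfServicesPool"];

            // Recycle on a fixed interval...
            pool.Recycling.PeriodicRestart.Time = TimeSpan.FromHours(12);

            // ...or when private memory exceeds a threshold (value is in KB).
            pool.Recycling.PeriodicRestart.PrivateMemory = 2 * 1024 * 1024; // ~2 GB

            serverManager.CommitChanges();
        }
    }
}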
Based on my experience, I would suggest you keep them in IIS unless you really, really need to squeeze that last bit of juice out of your servers.
I am working on a Silverlight project that consumes domain services. I find it quite messy, with one domain service class plus metadata. I have already worked with WCF services and found them very easy to update and maintain, but modifying a domain service (as new fields or tables are added) is a real pain.
I want to know why people prefer domain services over Silverlight-enabled WCF services, i.e. the advantages and disadvantages of both, and the performance implications.
After googling, I found these are the things you should look at:
To authenticate users faster in the domain
To authenticate resources (gps etc.) faster for the users
Utilization of resources
Utilization of the network, decreasing the overall traffic on the network.
The main benefit is user and password management, which could grow to be a massive amount of work if you have to manage them individually on each independent server. The proposed change of migrating the whole platform to an Active Directory environment will assist in propagating changes (such as new users, password changes, new security requirements via GPO, etc.) to the servers (which will run as domain clients; only 1 or 2 will run the primary and secondary ADC. Not all of these servers are going to host AD or be an ADC; a server OS is used due to its robustness and reliability).
Disadvantages:
Cost of infrastructure
Good planning is a must
More complex structure for users
So, in my WCF service, I will be caching some data so that future calls into the service can obtain it.
What is the best way to cache data in WCF, and how does one go about doing it?
If it helps, the WCF service is multithreaded (ConcurrencyMode is Multiple) and ReleaseServiceInstanceOnTransactionComplete is set to false.
On the first call the data may not be cached yet, so the service will fetch it from some source (could be a DB, could be a file, could be wherever), but thereafter it should be cached and made available (ideally with an expiry mechanism for the object).
thoughts?
Some of the most common solutions for a WCF service seem to be:
Windows AppFabric
Memcached
NCache
Try reading Caching Solutions
An SOA application can't scale effectively when the data it uses is kept in storage that is not scalable for frequent transactions. This is where distributed caching really helps. Coming back to your question and the answer by ErnieL, here is a brief comparison of these solutions.
As far as Memcached is concerned: if your application needs to run on a cluster of machines, it is very likely that you will benefit from a distributed cache; however, if your application only needs to run on a single machine, you won't gain any benefit from a distributed cache and will probably be better off using the built-in .NET cache.
Accessing a Memcached cache requires interprocess/network communication, which carries a small performance penalty compared to the .NET caches, which are in-process. Memcached runs as an external process/service, which means you need to install and run that service in your production environment. Again, the .NET caches don't need this step, as they are hosted in-process.
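To illustrate the in-process option, here is a minimal sketch using System.Runtime.Caching; the class name, cache key and loader delegate are made up. MemoryCache is thread-safe, so it fits a ConcurrencyMode.Multiple service.

// Get-or-add wrapper around MemoryCache with an absolute expiry.
using System;
using System.Runtime.Caching;

public static class ReferenceDataCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;
    private static readonly object SyncRoot = new object();

    public static T GetOrAdd<T>(string key, Func<T> loadFromSource, TimeSpan expiry) where T : class
    {
        // Fast path: item already cached.
        var cached = Cache.Get(key) as T;
        if (cached != null)
            return cached;

        // Slow path: load once, then cache with an absolute expiry.
        lock (SyncRoot)
        {
            cached = Cache.Get(key) as T;
            if (cached != null)
                return cached;

            var value = loadFromSource();
            Cache.Set(key, value, DateTimeOffset.UtcNow.Add(expiry));
            return value;
        }
    }
}

// Usage inside a service operation (hypothetical names):
// var rates = ReferenceDataCache.GetOrAdd("exchange-rates", LoadRatesFromDb, TimeSpan.FromMinutes(10));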
If we compare the features of NCache and AppFabric, the NCache folks are very confident about the range of features they offer compared to AppFabric. You can find plenty of material comparing these two products, like this one:
http://distributedcaching.blog.com/2011/05/26/ncache-features-that-app-fabric-does-not-have/
I've heard a lot of people touting success using Linux-based proxies to handle routing for high availability of web applications, but what are others doing with web services? I have a bank of WCF services that need to be moved to a high-availability (failover) model, meaning that if a particular server hosting the WCF services goes down, the request is routed to another of the servers in the bank. I would rather stay away from implementing a Linux-based solution, since there are no Linux-knowledgeable people in the environment.
If you don't need durability, you can load balance WCF service requests just like normal web requests without doing anything special. If you need durability and want requests to survive being cut off mid-process, use the netMsmqBinding.
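As a rough sketch of the queued option: the contract, queue name and address below are illustrative, and ExactlyOnce requires a transactional queue.

// Expose a WCF contract over netMsmqBinding so requests are queued durably
// and survive a server going down mid-process. Queued operations are one-way.
using System;
using System.ServiceModel;

[ServiceContract]
public interface IOrderService
{
    [OperationContract(IsOneWay = true)]
    void SubmitOrder(string orderXml);
}

public class OrderService : IOrderService
{
    public void SubmitOrder(string orderXml)
    {
        // Process the queued request here.
    }
}

class Program
{
    static void Main()
    {
        var binding = new NetMsmqBinding(NetMsmqSecurityMode.None)
        {
            Durable = true,     // messages survive a restart
            ExactlyOnce = true  // requires a transactional queue
        };

        var host = new ServiceHost(typeof(OrderService));
        host.AddServiceEndpoint(
            typeof(IOrderService),
            binding,
            "net.msmq://localhost/private/orders"); // hypothetical queue address
        host.Open();

        Console.WriteLine("Listening on the orders queue. Press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}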
I would rather stay away from implementing a Linux-based solution, since there are no Linux-knowledgeable people in the environment.
This is probably a strong enough reason to not use a Linux-based solution. Doing what you describe well requires reasonable expertise beyond a simple recipe approach, and substantial maintenance.