I have a web service developed in WCF, and it's deployed to a web farm (3 servers). We are trying to implement caching using MemoryMappedFile. How does memory mapping behave in a web farm? Is there any option to manage the MemoryMappedFile across servers?
If your task is to cache data across servers, a memory-mapped file will not be a good solution: it lives in the memory of a single machine and cannot be shared between the servers in your farm.
You can use a proper distributed caching solution like Redis for that.
You will also need a C# client library to communicate with Redis; you can use this one: https://servicestack.net/redis
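For illustration, a minimal cache-aside sketch using the ServiceStack.Redis client might look like this; the Product type, the Redis address and the GetProductFromDatabase helper are hypothetical placeholders, not part of your code:

```csharp
using System;
using ServiceStack.Redis;

public class ProductCache
{
    // Hypothetical POCO; substitute your own model type.
    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // All web farm nodes point at the same Redis instance,
    // so the cache is shared across servers.
    private readonly IRedisClientsManager _redisManager =
        new PooledRedisClientManager("localhost:6379");

    public Product GetProduct(int id)
    {
        using (var redis = _redisManager.GetClient())
        {
            var key = "product:" + id;

            // Try the shared cache first.
            var cached = redis.Get<Product>(key);
            if (cached != null)
                return cached;

            // Cache miss: load from the database (placeholder call)
            // and store it with an expiry so stale data ages out.
            var product = GetProductFromDatabase(id);
            redis.Set(key, product, TimeSpan.FromMinutes(10));
            return product;
        }
    }

    private Product GetProductFromDatabase(int id)
    {
        // Placeholder for your real data access code.
        return new Product { Id = id, Name = "Sample" };
    }
}
```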
I cannot seem to find any combination of tutorials or information online to set me in the right direction, so I'm hoping the community can help me out!
I have some experience with WCF in the past (mostly simple/default HTTP implementations), but nothing on the level of what I am attempting with my current architecture. Unfortunately, 99% of the info I'm finding on WCF is a couple of years old, and most of it does not address Azure-specific details. Most books were published back in 2007 and do not address the newer IDE/tooling or WCF updates since that time. Needless to say, I have a few open questions, and would love to get pointed in the right direction after exhausting Google, Stack Overflow, MSDN & YouTube!
In a nutshell:
I want to centralize all business logic behind a single WCF service on Azure (it will be load balanced on a Cloud Service).
I have a number of web clients that will be consuming this service.
All the clients are C#/.NET MVC projects that I control (I do not need or want the WCF endpoints to be publicly available).
I would prefer to whitelist access to the endpoints, rather than implement authentication (for performance & simplicity).
Here are my questions and potential speed bumps:
Is WCF the right solution? Is there a newer, better technology I should be using?
If I use a Cloud Service for my WCF solution, is a WebRole or WorkerRole my best option, and why? Is hosting the service as a Website an option? (It would save cost.)
In my research I've landed on the fact that using NetTCP binding is faster than using the default HTTP bindings. But I can't find a simple example of how to set this up using VS 2013/.NET 4.5/Azure Cloud Service. Is there a good tutorial for this? Also, I'm assuming named pipes are not an option for me?
Since all the consumers of the WCF service will be running on Azure Websites, is NetTCP still possible? How do I create service references? I'm assuming I just use the NetTCP endpoint address, but what about whitelisting for security within the Azure infrastructure?
How can my Azure Website clients connect over TCP within Azure as fast as possible? Affinity groups don't seem to be an option for Websites; should I abandon this and deploy all my clients as WebRoles so they can share affinity with my WCF service? Is Azure smart enough to know that the website is calling a machine within the same region and keep the connection within the region? How is this ensured?
I will have a debug, stage and production environment for my WCF service. What is the best way to switch between the various endpoints on my Azure Website client(s)? I'd prefer to do it during startup in my global.asax file using C#, rather than in my web.config. I only intend to keep one setting in my Web.Config for "Environment". Ideally I will have a switch statement in my startup file that will determine which WCF environment endpoint to use for my Service References, roughly as sketched below.
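Something like the following rough sketch is what I have in mind (using a ChannelFactory with a hypothetical contract as a stand-in for my generated Service Reference proxies; the endpoint addresses and the "Environment" setting name are placeholders for my real values):

```csharp
using System;
using System.Configuration;
using System.ServiceModel;

// Hypothetical service contract, standing in for my real WCF contract.
[ServiceContract]
public interface IProductService
{
    [OperationContract]
    string GetProductName(int id);
}

public static class ServiceEndpoints
{
    // Called once from Application_Start in global.asax.
    public static ChannelFactory<IProductService> CreateFactory()
    {
        // A single "Environment" appSetting drives the endpoint choice.
        var environment = ConfigurationManager.AppSettings["Environment"];

        string address;
        switch (environment)
        {
            case "Debug":
                address = "net.tcp://localhost:8080/ProductService";
                break;
            case "Stage":
                address = "net.tcp://stage-wcf.example.com:8080/ProductService";
                break;
            case "Production":
                address = "net.tcp://prod-wcf.example.com:8080/ProductService";
                break;
            default:
                throw new InvalidOperationException(
                    "Unknown environment: " + environment);
        }

        return new ChannelFactory<IProductService>(
            new NetTcpBinding(), new EndpointAddress(address));
    }
}
```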
My apologies for the array of questions. I was thinking about breaking this out into multiple posts, but keeping them in the same context seemed to be the only way to ensure that I am communicating the scope of my inquiry.
Thank you.
I found a great series of videos on Microsoft Virtual Academy that answers all of my questions:
Azure & Services
The key videos in this series are 1, 2 & 7. Here is a direct link to each one:
Intro to WCF
WCF on Azure
Advanced Topics
I'm designing a software system which has some C++ projects and Java web applications hosted on Apache/Tomcat. Native code [the C++ outputs] will connect to other systems [DB, external gateways, etc.] through the web apps via HTTP requests. To keep the system distributed/modular, I'm planning to use several [5 to 10] web applications.
My system is not finished yet, but it is functional enough to sell. Even at only 20% of its full feature set, I already have to go through a huge deployment procedure because of the number of web apps.
My question is:
Is it reasonable to merge a few web apps TEMPORARILY to reduce deployment overhead [I can do this until each one grows a significantly larger code base] and make HTTP requests within that same web application?
Will it cause any performance/memory/threading issues?
If you are merging two or three web components and want to deploy them in a single JVM, then you should not use HTTP requests between the web components. For this you can use JBoss OSGi: http://www.jboss.org/jbossas/subprojects/osgi
The solution I found was to use a hosted JVM, i.e. an application hosted either in a Servlet container or in a web service.
This way, a single JVM is reused.
The remaining problem is that you need a communication mechanism between the two applications, for which I prefer TCP sockets.
I'm setting up an AppFabric caching cluster on a small webfarm (5 web servers).
The caching cluster is installed on the same servers that run IIS, if that matters.
I only use the AppFabric cache for my Model layer, meaning mostly business logic objects created from database queries. No page caching or similar.
This works just fine when enabled on the main website.
However on one of the 5 web servers there's a second IIS site, which hosts a couple of services, amongst others 3 WCF endpoints, as well as 2 old-school ASMX webservices.
When I enable AppFabric caching for this site, it tears the whole cluster down. A call to Get-CacheClusterHealth shows all 5 hosts are completely gone (100% in Unallocated named cache fractions).
The Model code is actually the exact same DLLs as we use for the main website, so I doubt it's anything in the code (since the main site works)
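For context, the Model layer uses the cache roughly like this (the GetOrAdd helper and the cache name are illustrative, not my actual code):

```csharp
using System;
using Microsoft.ApplicationServer.Caching;

public class ModelCache
{
    // One DataCacheFactory per app domain; it reads the
    // dataCacheClient section from web.config.
    private static readonly DataCacheFactory Factory = new DataCacheFactory();
    private static readonly DataCache Cache = Factory.GetCache("default");

    public T GetOrAdd<T>(string key, Func<T> loadFromDatabase) where T : class
    {
        // Try the cluster first.
        var cached = Cache.Get(key) as T;
        if (cached != null)
            return cached;

        // Cache miss: query the database, then store the object
        // in the cluster with a 10-minute expiry.
        var value = loadFromDatabase();
        Cache.Put(key, value, TimeSpan.FromMinutes(10));
        return value;
    }
}
```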
I noticed this error in IIS -> AppFabric Dashboard: Error occurs while parsing service file myendpoint.svc
So that got me thinking: could this be caused by the WCF endpoints somehow?
There is a related question to this here:
AppFabric Cache server and web application on same physical machine
Microsoft doesn't recommend having cache nodes be dual-use (also hosting applications). This could be the cause of your problem. We use an AppFabric cache cluster, but we dedicate those servers to AppFabric and nothing else. See the article from MS here:
AppFabric Caching Physical Architecture
I am looking for suggestions for hosting my WCF enterprise application.
The app needs to run on the server without stopping. It also uses TCP to get the best performance in the intranet environment.
I am thinking of hosting it in a Windows service, because IIS recycles its worker processes and has idle timeouts.
However, I found this on MSDN (http://msdn.microsoft.com/en-us/library/ff649818.aspx):
Window service...Lack of enterprise features. Windows services do not have the security, manageability, scalability, and administrative features that are included in IIS.
Does this mean a Windows service is not suitable for an enterprise application? But how about MS SQL, Oracle, MySQL, etc.? They are all hosted as Windows services, right?
Regards
Bryan
A Windows service is suitable for an enterprise application! The quoted text actually means that IIS has a lot of built-in management features which are not available in custom hosting (like a Windows service) unless you implement them on your own.
One such feature is the recycling you want to avoid, which helps keep the application's resource consumption low (and the server in a healthy state). Another such feature is IIS monitoring of worker-process health: if a worker process looks stuck (it doesn't process requests for any reason), IIS automatically starts another process and routes new requests to it.
IIS + WAS + AppFabric can provide a very big feature set, but they are not good for every scenario. If you have a service which requires continuous background, scheduled, or multi-threaded processing, it is probably better to move to a self-hosted scenario.
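For reference, a minimal self-hosting sketch could look roughly like this; the contract, service names and net.tcp address are hypothetical placeholders, not a prescription:

```csharp
using System.ServiceModel;
using System.ServiceProcess;

// Hypothetical contract and implementation, standing in for the real service.
[ServiceContract]
public interface ICalculator
{
    [OperationContract]
    int Add(int a, int b);
}

public class Calculator : ICalculator
{
    public int Add(int a, int b) { return a + b; }
}

// The Windows service owns the WCF ServiceHost lifetime,
// so there is no IIS recycling or idle timeout involved.
public class CalculatorWindowsService : ServiceBase
{
    private ServiceHost _host;

    protected override void OnStart(string[] args)
    {
        _host = new ServiceHost(typeof(Calculator));
        _host.AddServiceEndpoint(
            typeof(ICalculator),
            new NetTcpBinding(),
            "net.tcp://localhost:8523/Calculator");
        _host.Open();
    }

    protected override void OnStop()
    {
        if (_host != null)
        {
            _host.Close();
            _host = null;
        }
    }

    public static void Main()
    {
        ServiceBase.Run(new CalculatorWindowsService());
    }
}
```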
What would be reasons to host a WCF service in a Windows service and not in IIS?
One reason is that IIS6 only supports bindings based on HTTP. If you want to use TCP, MSMQ, etc., then you need to host in a separate program.
When hosting in IIS you are only allowed to bind to a single port per base address in each web site (meaning you can't specify two bindings with different ports, or endpoints that use different ports, since you can only use a single port).
You can only use a single base address in IIS; the only way around this is deploying multiple versions of the same project in different websites (yuck).
The IIS process must recycle eventually, and when it does it dumps everything and restarts. That is good a lot of the time, since memory is freed and trapped resources are released, but when using singletons this can have an undesired effect depending on your code.
[edit] : more points
In a standard setup your worker process always has 2 GB of virtual memory available (no matter whether you have 1, 2 or 4 GB of physical memory in the machine).
Freedom. You as the developer don't need someone to administer the box
Sometimes IIS6 is really just overkill
You are using it as an interprocess communication conduit.
You wish to declare all of the bindings in code. This is far less confusing and more powerful than the XML config files that seem to be all the rage. I can't envision many scenarios where I would want a non-programmer messing with bindings. The XML approach is fine for prototyping and systems that need to be highly dynamic, but overall I don't think it's a good idea (a rough sketch of code-based configuration follows below).
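As an illustration of that last point, a self-hosted endpoint with its binding declared and tuned entirely in code might look something like this; the contract, address and binding settings are just example values:

```csharp
using System;
using System.ServiceModel;

// Hypothetical contract and implementation for the sketch.
[ServiceContract]
public interface IGreeter
{
    [OperationContract]
    string Greet(string name);
}

public class Greeter : IGreeter
{
    public string Greet(string name) { return "Hello, " + name; }
}

class Program
{
    static void Main()
    {
        // Binding declared and tuned entirely in code; nothing is
        // read from <system.serviceModel> in a config file.
        var binding = new NetTcpBinding(SecurityMode.Transport)
        {
            MaxReceivedMessageSize = 1024 * 1024,      // 1 MB
            ReceiveTimeout = TimeSpan.FromMinutes(5)
        };

        var host = new ServiceHost(typeof(Greeter));
        host.AddServiceEndpoint(
            typeof(IGreeter),
            binding,
            "net.tcp://localhost:9000/Greeter");

        host.Open();
        Console.WriteLine("Service running. Press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}
```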