I need to use Quartz for a time-consuming task that updates data in my project. I'm afraid that adding the workers to the web API will limit the performance of the web API while the tasks run in the background. I'm hosting my web API in Amazon, so I can either beef up that server or deploy the background jobs to another server as a separate service.
Hosting the Workers and WebApi on the same server will probably be cheaper. But I know that deploying them separately will make fixes much easier to deploy.
If your background tasks do CPU-intensive or I/O-intensive work, hosting the Workers and the WebApi application(s) on the same server might result in resource contention and degrade performance.
On the other hand, isolating your workers onto a separate server (or service) in Amazon incurs an additional charge. You could monitor CPU, memory, and other usage metrics first, then decide whether the current hosting approach is adequate.
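If you do end up splitting them, the workers can live in a plain console or Windows-service process with their own scheduler, so heavy jobs never compete with web API request threads. A minimal sketch, assuming Quartz.NET 3.x; UpdateDataJob and the 30-minute interval are placeholders:

    using System;
    using System.Threading;
    using System.Threading.Tasks;
    using Quartz;
    using Quartz.Impl;

    // Placeholder job; the real body would run the data update.
    public class UpdateDataJob : IJob
    {
        public Task Execute(IJobExecutionContext context)
        {
            Console.WriteLine("Updating data...");
            return Task.CompletedTask;
        }
    }

    public static class WorkerProgram
    {
        public static async Task Main()
        {
            // The scheduler runs in this dedicated worker process.
            IScheduler scheduler = await new StdSchedulerFactory().GetScheduler();
            await scheduler.Start();

            IJobDetail job = JobBuilder.Create<UpdateDataJob>().Build();
            ITrigger trigger = TriggerBuilder.Create()
                .StartNow()
                .WithSimpleSchedule(s => s.WithIntervalInMinutes(30).RepeatForever())
                .Build();

            await scheduler.ScheduleJob(job, trigger);
            await Task.Delay(Timeout.Infinite); // keep the worker alive
        }
    }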
Related
I'm currently working on a high-traffic legacy Hibernate (5.0) web application that relies heavily on L2 caching with ehcache. Let's say this web application currently also contains a thread pool that writes data. The tasks in this thread pool don't benefit from L2 caching but cause plenty of invalidations. Now we'd like to take this thread pool out of the legacy app and put it in its own Java process, ideally on a different server.
Would it be possible to configure Hibernate/Infinispan so that the new task process has no L2 cache (or one with no actual capacity) but sends invalidations to the web app, while at the same time the web app doesn't send any invalidations to the task process?
I'm designing a software system that has some C++ projects and Java web applications hosted on Apache/Tomcat. Native code [C++ binaries] will connect to other systems [DB, external gateways, etc.] through the web apps via HTTP requests. In order to make a good distributed/modular system, I'm planning to use several [5 to 10] web applications.
My system is still under development, but it is functional enough to sell. Yet even at 20% of its full feature set, I have to go through a huge deployment procedure, since it has so many web apps.
My question is,
Is it reasonable to merge a few web apps TEMPORARILY to reduce deployment overhead [I can do this until each has a significantly larger codebase] and make HTTP requests within that same web application?
Will it cause any performance/memory/threading issues?
If you are merging two or three web components and want to deploy them in a single JVM, then you should not use HTTP requests between the web components;
for this you can use JBoss OSGi: http://www.jboss.org/jbossas/subprojects/osgi
The solution I found was to use a hosted JVM, i.e. an application running in either a Servlet container or a web service.
This way, a single JVM is re-used.
The problem here is that you need a communication mechanism between the two applications; I prefer TCP sockets for this.
Even though this question is a year old, I am still searching for a good answer to it. I appreciate any information that will lead me to fully understand this issue regarding the low performance of communicating web services hosted on the same machine.
I am currently developing a system with several WCF Web Services that communicate intensively.
They are running under IIS7, on the same machine, each service being in a different Application Pool, with multiple workers in the Web Garden.
During the individual evaluation of each Web Service, I can serve 10000-20000 requests per minute, quickly and without any resource consumption issues (processor or memory).
When I test the whole system or just a subsystem formed by two Web Services I can't serve more than 2000 requests/minute.
I also observed that communication time between the Web Services is a big issue (sometimes more than 10 seconds). But when testing with only 1000 requests per minute everything goes smoothly (connection time of no more than 60 ms).
I have tested the system with both SOAPUI and JMETER, but the times were computed from system logs, not from the testing tools.
Memory and network aren't an issue (they are used very little).
Later on, I tested the performance of two communicating WCF web services, hosted on two servers and on the same server. Again there seems to be a bottleneck when the services are on the same machine, dropping the number of connections from tens of thousands to thousands; again, nothing limiting in memory or processor.
As a note, I am working with quite large data in some cases, and some of the required operations are long-running.
I used perfmon to see what's going on with memory, processes, web services, ASP.NET, etc., but I didn't see anything that could indicate what is going wrong.
I also tried all the performance settings and tuning options I could find on the Internet.
Does someone know what could be wrong? Why could the communication between Web Services take so long? Why can the Web Service that serves as the entry point to the system accept 10000 requests/minute when tested alone, but barely 2000 when communicating with another Web Service?
Is it an IIS7 problem? Could my system perform better if each Web Service were deployed on a different server?
I want to better understand how things function internally (IIS and WCF services) to improve performance for current and future systems.
You could try to collect data from the WCF performance counters: concurrent calls, instances, duration, and so on. In addition, WCF throttling provides some properties that you can use to limit how many instances or sessions are created at the application level. Performance of the WCF service can also be improved by choosing a proper instancing mode.
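For example, throttling can be set programmatically on a self-hosted service; a minimal sketch, where the contract, binding, and limit values are all illustrative rather than tuned for your workload:

    using System;
    using System.ServiceModel;
    using System.ServiceModel.Description;

    [ServiceContract]
    public interface IWorkService
    {
        [OperationContract]
        string Ping();
    }

    public class WorkService : IWorkService
    {
        public string Ping() { return "pong"; }
    }

    public static class ThrottleDemo
    {
        public static void Main()
        {
            var host = new ServiceHost(typeof(WorkService),
                new Uri("http://localhost:8080/WorkService"));
            host.AddServiceEndpoint(typeof(IWorkService),
                new BasicHttpBinding(), string.Empty);

            // Application-level throttling: cap concurrent calls, instances,
            // and sessions so one service cannot starve the whole machine.
            var throttle = host.Description.Behaviors.Find<ServiceThrottlingBehavior>();
            if (throttle == null)
            {
                throttle = new ServiceThrottlingBehavior();
                host.Description.Behaviors.Add(throttle);
            }
            throttle.MaxConcurrentCalls = 64;
            throttle.MaxConcurrentInstances = 64;
            throttle.MaxConcurrentSessions = 100;

            host.Open();
            Console.WriteLine("Listening; press Enter to stop.");
            Console.ReadLine();
            host.Close();
        }
    }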
Finally, in load testing there are many configurations to apply to the different components: max concurrent HTTP connections, IIS limits, using multiple load clients, and so on. Your load test may be invalidated by any of these.
I am looking for suggestions for hosting my WCF enterprise application.
The app needs to run on the server without stopping. It also uses TCP to get the best performance in the intranet environment.
I am thinking of hosting it in a Windows service, because IIS recycles the process and has timeouts.
However, I found this on MSDN (http://msdn.microsoft.com/en-us/library/ff649818.aspx):
Windows service...Lack of enterprise features. Windows services do not have the security, manageability, scalability, and administrative features that are included in IIS.
Does this mean a Windows service is not suitable for an enterprise application? But how about MS SQL Server, Oracle, MySQL, etc.? They are all hosted as Windows services, right?
Regards
Bryan
A Windows service is suitable for an enterprise application! The quoted text actually means that IIS has a lot of built-in management features which are not available in custom hosting (like a Windows service) unless you implement them on your own.
One such feature is the recycling you want to avoid, which helps the application keep resource consumption low (the server stays in a healthy state). Another such feature is IIS monitoring of worker state: if a worker process looks stuck (not processing requests for any reason), IIS automatically starts another process and routes new requests to it.
IIS + WAS + AppFabric can provide a very big feature set, but they are not good for every scenario. If you have a service which requires continuous background, scheduled, or multi-threaded processing, it is probably better to move to a self-hosted scenario.
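A minimal sketch of that self-hosted scenario, assuming a hypothetical WorkService WCF implementation with its endpoints declared in app.config:

    using System.ServiceModel;
    using System.ServiceProcess;

    // Windows service that self-hosts a WCF ServiceHost; WorkService and its
    // endpoints (e.g. net.tcp) are assumed to be defined elsewhere/in config.
    public class WcfWindowsService : ServiceBase
    {
        private ServiceHost host;

        public WcfWindowsService()
        {
            ServiceName = "WcfWindowsService";
        }

        protected override void OnStart(string[] args)
        {
            host = new ServiceHost(typeof(WorkService));
            host.Open(); // no recycling or idle timeout, unlike IIS
        }

        protected override void OnStop()
        {
            if (host != null)
            {
                host.Close();
                host = null;
            }
        }

        public static void Main()
        {
            Run(new WcfWindowsService());
        }
    }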
I'm developing a web application that needs to perform a task that consumes a lot of CPU and memory, and that may also last several minutes. In order to provide a better user experience, I also developed a Windows service that hosts a WCF service which performs this "high cost" task and communicates with the web app using MSMQ (message queues).
This worked great until I tried to run a load test... The Windows service starts consuming a lot of resources, putting the CPU at 100% and using more than 1 GB of memory. I've looked for optimizations and I've done a lot of tweaks to the code, and I think it is very efficient, but the task just requires a lot of resources.
The problem is that while the WCF service is working, the CPU gets used at 100% and the web app turns INCREDIBLY SLOW! I don't mind if the task that the WCF service does takes a couple of minutes more, but I want the web app to perform well for users.
So I'm wondering if there is a way to limit the resources that the WCF service can consume, giving priority to the web app.
Thanks in advance.
Juan
The easy solution would be to place the WCF service on a different machine.
The fact that the service is using a lot of CPU is probably not related to your use of WCF.
There are some ways that you may be able to improve the performance of your web app:
Process only one message at a time.
Break the jobs into smaller parts.
Set the priority of the Windows service to Below Normal in Task Manager (see the sketch after this list)
Install more RAM on the server
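On the priority point: the worker can also lower its own priority at startup so the setting survives restarts; a small sketch (the class name is hypothetical):

    using System.Diagnostics;

    public static class PriorityHelper
    {
        // Same effect as choosing "Below normal" in Task Manager, but applied
        // by the worker itself, e.g. as the first line of Main().
        public static void LowerOwnPriority()
        {
            Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.BelowNormal;
        }
    }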
I guess this is a problem with your Windows service design. When you decide to host WCF in a Windows service, you have to control resource utilization yourself, which means you have to control throttling. You have to create configurable control over the internal service processing so that you can change the load based on available resources. If you host WCF in IIS, it already provides such control at the AppPool level.
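One simple form of such configurable control is a semaphore that caps how many expensive jobs run at once; a sketch, where the limit of 2 stands in for a value read from configuration:

    using System;
    using System.Threading;

    public static class JobThrottle
    {
        // Cap on concurrent expensive jobs; in practice read from config
        // so it can be changed based on available resources.
        private static readonly SemaphoreSlim Slots = new SemaphoreSlim(2);

        public static void RunThrottled(Action job)
        {
            Slots.Wait();           // block until a slot is free
            try { job(); }
            finally { Slots.Release(); }
        }
    }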
There are some freeware tools which allow limiting CPU usage for a given process, but that is not something I would recommend for production use.
Best regards, Ladislav