I'm designing a software system which has some C++ projects and Java web applications hosted on Apache/Tomcat. The native code [the C++ build outputs] will connect to other systems [DB, external gateways, etc.] through the web apps via HTTP requests. In order to build a well-distributed, modular system, I'm planning to use several [5 to 10] web applications.
The system is not fully developed yet, but it is functional enough to sell. Even at roughly 20% of its full feature set, I already have to go through a huge deployment procedure because of the number of web apps.
My questions are:
Is it reasonable to merge a few web apps TEMPORARILY to reduce deployment overhead [I can do this until each one grows a significantly larger codebase] and make HTTP requests within that same web application?
Will it cause any performance/memory/threading issues?
If you are merging two or three web components and want to deploy them in a single JVM, then you should not use HTTP requests between the web components. For this you can use JBoss OSGi: http://www.jboss.org/jbossas/subprojects/osgi
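As a rough sketch of that in-JVM alternative (the OrderService interface and the activator names below are made up for illustration, not from the question), one bundle registers a service and another looks it up and calls it directly, so no HTTP request is involved:

```java
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;

// Shared service interface, exported by a common API bundle (hypothetical name).
interface OrderService {
    String process(String orderId);
}

// Bundle A: registers an implementation as an OSGi service.
public class ProviderActivator implements BundleActivator {
    @Override
    public void start(BundleContext context) {
        OrderService impl = new OrderService() {
            @Override
            public String process(String orderId) {
                return "processed " + orderId;
            }
        };
        context.registerService(OrderService.class.getName(), impl, null);
    }

    @Override
    public void stop(BundleContext context) {
        // The registration is unregistered automatically when this bundle stops.
    }
}

// Bundle B: looks the service up and calls it as a plain in-JVM method call.
class ConsumerActivator implements BundleActivator {
    @Override
    public void start(BundleContext context) {
        ServiceReference ref = context.getServiceReference(OrderService.class.getName());
        if (ref != null) {
            OrderService orders = (OrderService) context.getService(ref);
            System.out.println(orders.process("42"));
            context.ungetService(ref);
        }
    }

    @Override
    public void stop(BundleContext context) {
    }
}
```

If OSGi feels heavy for a temporary merge, the same idea applies with a plain shared Java interface wired up by whatever dependency-injection mechanism the web apps already use: the point is an in-process call instead of an HTTP hop.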
The solution I found was to use a hosted JVM, i.e. an application running in either a servlet container or a web service. This way, a single JVM is reused. The remaining problem is that you need a communication mechanism between the two applications, and I prefer using TCP sockets for that.
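A minimal sketch of such a socket link between two co-hosted applications (the class names and the port number are made up for illustration): one side listens on a loopback port, the other connects and exchanges a line of text.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Application A: listens on a local port and acknowledges each message it receives.
public class SocketReceiver {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(9090)) {          // port chosen arbitrarily
            while (true) {
                try (Socket client = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(client.getInputStream()));
                     PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                    String request = in.readLine();
                    out.println("ack: " + request);                   // reply to the peer
                }
            }
        }
    }
}

// Application B: connects over the loopback interface and sends one message.
class SocketSender {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("localhost", 9090);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            out.println("hello from app B");
            System.out.println(in.readLine());                        // prints "ack: hello from app B"
        }
    }
}
```

For anything beyond simple line-based messages you would want to layer a small framing/serialization scheme on top of the raw socket.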
I need to use Quartz for a time-consuming task that updates data in my project. I'm afraid that adding the workers to the web API will limit the performance of the web API while the tasks are running in the background. I'm hosting my web API on Amazon, so I can either beef it up or deploy this project to another server and handle the background jobs in a separate service.
Hosting the Workers and the WebApi on the same server will probably be cheaper, but I know that deploying them separately will make fixes much easier to roll out.
If your background tasks do CPU-intensive or I/O-intensive work, hosting the Workers and the WebApi application(s) on the same server might result in resource contention and poor performance.
On the other hand, isolating your app (or workers) on a separate server (or service) in Amazon incurs an additional charge. You can monitor CPU, memory, and other usage metrics first, and then decide whether the current hosting approach is adequate.
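If the Quartz in question is the Java Quartz scheduler (rather than Quartz.NET), a minimal sketch of running the job in a standalone worker process, so it never competes with the web API for CPU or I/O, might look like this (assuming Quartz 2.x; the job and trigger names are made up for illustration):

```java
import org.quartz.Job;
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.Scheduler;
import org.quartz.SimpleScheduleBuilder;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;
import org.quartz.impl.StdSchedulerFactory;

// Hypothetical time-consuming job; the real update work would go in execute().
public class DataUpdateJob implements Job {
    @Override
    public void execute(JobExecutionContext context) {
        System.out.println("updating data...");
    }
}

// Standalone worker process: runs the scheduler outside the web API.
class WorkerMain {
    public static void main(String[] args) throws Exception {
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();

        JobDetail job = JobBuilder.newJob(DataUpdateJob.class)
                .withIdentity("dataUpdateJob")
                .build();

        Trigger trigger = TriggerBuilder.newTrigger()
                .withIdentity("hourlyTrigger")
                .withSchedule(SimpleScheduleBuilder.simpleSchedule()
                        .withIntervalInHours(1)
                        .repeatForever())
                .build();

        scheduler.scheduleJob(job, trigger);
        scheduler.start();   // keeps running until scheduler.shutdown() is called
    }
}
```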
I have a web service developed in WCF, and it's deployed as a web farm (3 servers). We are trying to implement caching using MemoryMappedFile. How does memory mapping behave in a web farm? Is there any option to manage the MemoryMappedFile across servers?
If your task is to cache data across servers, a memory-mapped file will not be a good solution, since it is local to a single machine.
You can use a proper distributed caching solution like Redis for that.
You will also need a C# library to communicate with Redis; you can use this one: https://servicestack.net/redis
The setup at the current employer has one set of back office functions on a Java platform and another group of functions on two separate .NET-based platforms. There is no overall architect.
The Java guys decided to go with Apache Qpid and AMQP for messaging, presumably amongst themselves, with the .NET systems, and with other external systems.
.NET architecture involves WCF services hosted in IIS/WAS and Windows Server AppFabric.
Does anyone have any experience with AmqpBinding and IIS/WAS, and are there any possible pitfalls?
I think your first problem will be IIS/WAS/AppFabric, because non-HTTP services hosted in WAS have an additional infrastructure requirement: an extra listener process, usually running as a Windows service, that communicates with the worker process. This process is responsible for receiving and sending messages and enables service activation in WAS. I don't think the Qpid project provides such a listener process, so you will most probably have to implement the listener yourselves - check the sample for a custom UDP activator.
Even though this question is a year old, I am still searching for a good answer to it. I would appreciate any information that helps me fully understand this issue of poor performance between communicating web services hosted on the same machine.
I am currently developing a system with several WCF Web Services that communicate intensively.
They are running under IIS7, on the same machine, each service in a different application pool with multiple worker processes in a web garden.
When evaluating each Web Service individually, I can serve 10,000-20,000 requests per minute quickly and without any issues with resource consumption (processor and memory).
When I test the whole system, or just a subsystem formed by two Web Services, I can't serve more than 2,000 requests/minute.
I also observed that the communication time between Web Services is a big issue (sometimes more than 10 seconds). But when testing with only 1,000 requests per minute, everything goes smoothly (connection times of no more than 60 ms).
I have tested the system with both SoapUI and JMeter, but the times were computed from the system logs, not from the testing tools.
Memory and network aren't an issue (they are used very little).
Later on, I tested the performance of two communicating WCF web services, hosted first on two servers and then on the same server. Again there seems to be a bottleneck when the services are on the same machine, lowering the number of connections from tens of thousands to thousands; again, with no memory or processor limit being hit.
As a note, I am working with quite large data in some cases, and some of the required operations are long-running.
I used Performance Monitor (perfmon) to see what's going on for memory, processes, web services, ASP.NET, etc., but I didn't see anything that could indicate what is going wrong.
I also tried all the performance settings and tuning options I could find on the Internet.
Does anyone know what could be wrong? Why could the communication between Web Services take so long? Why can the Web Service that serves as the entry point to the system accept 10,000 requests/minute when tested alone, but barely 2,000 when communicating with another Web Service?
Is it an IIS7 problem? Could my system perform better if each Web Service were deployed on a different server?
I want to better understand how things work internally (IIS and WCF services) in order to improve performance for current and future systems.
You could try to collect data from the WCF performance counters: concurrent calls, instances, duration, and so on. In addition, WCF throttling provides properties that you can use to limit how many instances or sessions are created at the application level; the performance of a WCF service can also be improved by choosing an appropriate instancing mode.
Finally, in load testing there are many configurations to apply to the different components: maximum concurrent HTTP connections, IIS limits, using several load clients, and so on. Your load test may be invalidated if these are not taken into account.
We are working on developing a Java EE based application. Our application is Java 1.5 compatible and will be deployed to WAS ND 6.1.0.21 with the EJB 3.0 and Web Services feature packs. The configuration is currently one cell with two clusters. Each cluster will have two nodes.
Our application, or rather our system, comes in two or three parts.
Part 1: An ear deployed to one cluster that contains 3rd party vendor code combined with customization code. Their code is EJB 2.0 compliant and has a lot of Remote Home interfaces.
Part 2: An ear deployed to the same cluster as the first ear. This ear contains EJB 3's that make calls into the EJB 2's supplied by the vendor and the custom code. These EJB 3's are used by the JSF UI also packaged with the EAR, and some of them are also exposed as web services (JAX-WS 2.0 with SOAP 1.2 compliance) for other clients.
Part 3: There may be other services that do not depend on our vendor/custom code app. These services will be EJB 3.0's and web services that are deployed to the other cluster.
Per a recommendation from some IBM staff on site here, communication between nodes in a cluster can be EJB RMI. But if we are going across clusters and/or other cells, then the communication should be web services.
That said, some of us are wondering about performance and about optimizing the speed of communication for the applications that will use our web services and EJB's. Right now most EJB's are exposed as remote (and our vendor set theirs up that way, rather than also exposing local home interfaces). We are wondering whether WAS does any optimization between apps in the same node/cluster node space. If two apps are installed in the same area and they call each other via a remote home interface, is WAS smart enough to make it a local home interface call?
Are there other optimization techniques? Should we consider them? Should we not? What are the costs/benefits? Here is the question from one of our team members as sent in their email:
The question is: Supposing we develop our EJBs as remote EJBs, where our UI controller code is talking to our EXT java services via EJB3...what are our options for performance optimization when both the EJB server and client are running in the same container?
As one point of reference, Google has given me some oooooold WebSphere performance tuning documentation from 2000 that explains a tuning configuration you can set to enable Call By Reference for EJB communication when the EJBs are in the same application server JVM. It states the following:
Because EJBs are inherently location independent, they use a remote programming model. Method parameters and return values are serialized over RMI-IIOP and returned by value. This is the intrinsic RMI "Call By Value" model.
WebSphere provides the "No Local Copies" performance optimization for running EJBs and clients (typically servlets) in the same application server JVM. The "No Local Copies" option uses "Call By Reference" and does not create local proxies for called objects when both the client and the remote object are in the same process. Depending on your workload, this can result in a significant overhead savings.
Configure "No Local Copies" by adding the following two command line parameters to the application server JVM:
* -Djavax.rmi.CORBA.UtilClass=com.ibm.CORBA.iiop.Util
* -Dcom.ibm.CORBA.iiop.noLocalCopies=true
CAUTION: The "No Local Copies" configuration option improves performance by changing "Call By Value" to "Call By Reference" for clients and EJBs in the same JVM. One side effect of this is that Java object derived (non-primitive) method parameters can actually be changed by the called enterprise bean.
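To make that side effect concrete, here is a plain-Java stand-in (not an actual EJB invocation; the types are made up for illustration). With noLocalCopies enabled and the client in the same JVM, the bean receives the caller's actual object reference, so a mutation like the one below is visible to the caller afterwards; under the default RMI-IIOP call-by-value model, the bean would only see a deserialized copy and the caller's object would be unchanged.

```java
import java.io.Serializable;

// Illustrative mutable parameter type (made up for this sketch).
class OrderRequest implements Serializable {
    private static final long serialVersionUID = 1L;
    String status = "NEW";
}

// Stand-in for an enterprise bean method that modifies its parameter.
class OrderProcessorBean {
    public void process(OrderRequest request) {
        request.status = "PROCESSED";   // mutates the object it was handed
    }
}

public class NoLocalCopiesEffect {
    public static void main(String[] args) {
        OrderRequest request = new OrderRequest();
        new OrderProcessorBean().process(request);

        // This plain-Java call passes the actual reference, so it prints "PROCESSED",
        // which is how a colocated EJB call behaves under noLocalCopies=true.
        // Under the default call-by-value model, the bean would receive a copy and
        // the caller would still see "NEW" here.
        System.out.println(request.status);
    }
}
```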
We will also be using Process Server 6.2 and WESB 6.2 in the future. Any ideas or recommendations?
Thanks
The only automatic optimization that can really be done for remote EJBs is if they are colocated (accessed from within the same JVM). In that case, the ORB will short-circuit some of the work that would otherwise be required if the request needed to go across the wire. There will still be some necessary ORB overhead including object serialization (unless you turn on noLocalCopies, with all the caveats it brings).
Alternatively, if you know that the UI controller is colocated, your method calls do not rely on parameter or return value copying, and your interface does not rely on the exception differences between local and remote views, then you could create and expose a local subinterface that will be much faster than remote access through the ORB.
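As a minimal sketch of that local subinterface approach in EJB 3 (the names are illustrative, not from the actual application): the bean exposes both a remote and a local business view, and a colocated caller injects the local one so the call bypasses the ORB entirely.

```java
import javax.ejb.EJB;
import javax.ejb.Local;
import javax.ejb.Remote;
import javax.ejb.Stateless;

// Remote business interface for clients in other JVMs/clusters.
@Remote
public interface CatalogServiceRemote {
    String lookupItem(String itemId);
}

// Local business interface for colocated callers (no serialization, call by reference).
@Local
interface CatalogServiceLocal {
    String lookupItem(String itemId);
}

// One bean implementation exposing both views.
@Stateless
class CatalogServiceBean implements CatalogServiceRemote, CatalogServiceLocal {
    @Override
    public String lookupItem(String itemId) {
        return "item:" + itemId;
    }
}

// A colocated, container-managed caller (e.g. the JSF/UI controller layer)
// injects the local view for an in-JVM call with no ORB involvement.
class CatalogController {
    @EJB
    private CatalogServiceLocal catalog;

    public String show(String itemId) {
        return catalog.lookupItem(itemId);
    }
}
```

Remote clients in the other cluster would keep using the @Remote view (or the JAX-WS endpoint), so nothing changes for them.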