AJP Connector or HTTP Connector - apache

We have a web application (a 3rd-party product) hosted on a Tomcat 6.x server. We will be installing an IBM HTTP Server as the web server in front of the Tomcat server. While doing this, the product vendor has asked us to use the HTTP connector (instead of the AJP connector) for communication between Tomcat and the IHS web server.
The few articles I have read seem to indicate that
an AJP connector will provide faster performance than proxied HTTP.... It is otherwise functionally equivalent to HTTP clustering.
1. Apart from performance, are there any other reasons for choosing an AJP Connector versus an HTTP Connector?
2. Are there any other side effects of choosing the HTTP Connector instead of the AJP Connector?
Note: Our application has approximately 80 concurrent users during peak time.

AJP permits the proxy to tell the backend about client SSL certificate details, which in Java EE are used to satisfy some HttpServletRequest APIs.
You shouldn't use either in IHS, though, with an application server that IHS wasn't bundled with. You'll have no support, and the generic proxy support is not actively maintained.
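For context, both connectors are defined in Tomcat's conf/server.xml, and which one the web server talks to is simply a question of which listener you point it at. A minimal sketch, assuming Tomcat's conventional default ports (everything else is illustrative):

    <!-- conf/server.xml (Tomcat 6.x), illustrative sketch -->

    <!-- HTTP connector: what an HTTP-based reverse proxy would talk to -->
    <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" />

    <!-- AJP connector: what mod_jk or mod_proxy_ajp would talk to -->
    <Connector port="8009" protocol="AJP/1.3" />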

What makes nginx/apache a web server, HAProxy not?

What functionality does HAProxy lack that keeps it from being a web server?
HAProxy can listen on port 80 and can speak HTTP but that's not what people mean when they say "web server."
HAProxy is not a web server, because "web server" implies an HTTP endpoint that can serve static content from files and/or dynamic content generated from code. That's not what HAProxy is for.
Technically, there are certain capabilities in HAProxy that can be misused to emulate some capabilities of a web server -- you can serve very small static files from memory buffers and you can generate small dynamic responses using the optional embedded Lua interpreter -- but it is not intended or designed to be used as a web server. It's a proxy server -- emulating a web server toward the client, and emulating a client toward the real back-end web server(s) behind it -- because bidirectional emulation is commonly what proxies do.
With Nginx and Apache, you can specify a root directory from which files are served, and you can specify paths that are to be serviced by code running in languages like Perl, PHP, Python, etc. Not with HAProxy, because, again, that isn't what it's designed to do.
Both Nginx and Apache can also be used as proxy servers, as HAProxy can, but HAProxy is specifically designed and optimized for that primary purpose -- proxying and load balancing across multiple back-ends, selecting the back-end using various rules and algorithms... in essence, HAProxy is an "intermediate router" for HTTP requests, delivering them rather than responding to them. It can also proxy and load balance non-HTTP protocols that rely on TCP.
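To make the contrast concrete, here is a minimal sketch of the kind of configuration HAProxy is designed for (addresses and names are made up): there is no notion of a document root, only rules about where to forward each request.

    # haproxy.cfg, minimal proxy/load-balancer sketch
    defaults
        mode http
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    frontend http_in
        bind *:80
        default_backend web_servers

    backend web_servers
        balance roundrobin
        server web1 10.0.0.11:8080 check
        server web2 10.0.0.12:8080 check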

Enable Geode REST to use HTTP and HTTPS at the same time

If we set the Geode properties to use SSL for the web component, then all web traffic has to use HTTPS. Is there a way to configure Geode, for development purposes, to serve HTTP on one port (8080) and HTTPS on another (8443)?
It looks like Jetty can be configured to allow both using multiple connectors, even on the same port...
Unfortunately that isn't possible at the moment. I'd suggest trying to start different instances of the various components (locator and server) with different SSL settings (off or on) for testing purposes.
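For reference, a sketch of the kind of gemfire.properties settings involved, assuming the property names below match your Geode version (paths and passwords are placeholders). Once ssl-enabled-components includes web, the embedded HTTP service is HTTPS-only; there is no documented property for an additional plain-HTTP listener.

    # gemfire.properties, sketch only; verify names against your Geode docs
    ssl-enabled-components=web
    ssl-keystore=/path/to/keystore.jks
    ssl-keystore-password=changeit
    ssl-truststore=/path/to/truststore.jks
    ssl-truststore-password=changeit
    http-service-port=8443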

Is there a solution for AJP proxied websocket connections?

I'm currently using an AJP proxy through Apache to Tomcat 8. I don't want to get into why I'm using AJP, but the basics are that Apache sits outside the firewall while Tomcat is inside the firewall, with multiple apps being virtual-hosted through the one Apache instance.
A component that needs WebSockets has been added to the app. I know that our current AJP implementation will not support WebSockets, so I'm looking for an alternative that someone else has confirmed working, e.g. a different Apache module (I'm currently using mod_proxy_ajp).
If there is no known module that allows this to work, does anyone know of any work in progress on an enhancement to an existing module, or on a new one?
FWIW, I'm using Spring 4 WebSocket support with a STOMP endpoint and SockJS.
At the time of your question, there is no solution for WebSocket support via AJP.
Apache does have mod_proxy_wstunnel, but that supports proxying of WebSocket connections using the HTTP protocol itself to the backend server; AJP works differently.
See this Tomcat mailing list thread for some useful background:
https://mail-archives.apache.org/mod_mbox/tomcat-users/201408.mbox/%3C53FF3A3A.3040507#christopherschultz.net%3E
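For reference, a hedged sketch of the HTTP-based alternative that discussion points toward: route only the WebSocket path through mod_proxy_wstunnel (available in httpd 2.4.5 and later) and keep the rest of the traffic on AJP. Hostnames and context paths are made up; the more specific ProxyPass must come first.

    LoadModule proxy_module          modules/mod_proxy.so
    LoadModule proxy_ajp_module      modules/mod_proxy_ajp.so
    LoadModule proxy_wstunnel_module modules/mod_proxy_wstunnel.so

    # Hypothetical paths: WebSocket endpoint over ws:// (HTTP upgrade),
    # everything else over AJP as before.
    ProxyPass        "/myapp/ws/" "ws://tomcat.internal:8080/myapp/ws/"
    ProxyPass        "/myapp/"    "ajp://tomcat.internal:8009/myapp/"
    ProxyPassReverse "/myapp/"    "ajp://tomcat.internal:8009/myapp/"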

HTTPS node app on Cloud Foundry

Is it possible to deploy a node.js app on Cloud Foundry that listens for HTTPS requests on port 443?
I can find various references to SSL support in the Cloud Foundry forums, but no actual examples of HTTPS apps. The article "Setup SSL on cloudfoundry landscape" seems to indicate that I need to install nginx and use that, but there is not really enough information there to tell me what I need to do.
The SSL connection will terminate at the load balancer, which then forwards the unencrypted HTTP connection to your Node app.
Just use https://your-app.cloudfoundry.com instead of http://...
You don't need nginx in particular, but you do need something capable of listening on a port (which Cloud Foundry will assign, indicated by the environment variable PORT or, on older versions of Cloud Foundry, VCAP_APP_PORT). nginx will work for this purpose, but if you have written a Node.js app, the core http module (optionally paired with Express) is a more natural choice of web server.
Now, if your app requires SSL, you might think you need to configure your web server (nginx, Express, etc.) for HTTPS, but you do not, because Cloud Foundry handles the SSL and passes the decrypted HTTP to your web server.
So if you are using Node.js core modules, use the http module, not the https module.
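As an illustration, a minimal sketch of that pattern in TypeScript, using only the Node core http module (the response text is made up): the platform injects the port, and TLS never reaches the app.

    import * as http from "http";

    // Cloud Foundry injects the listen port; older releases used VCAP_APP_PORT.
    const port = Number(process.env.PORT || process.env.VCAP_APP_PORT || 8080);

    http.createServer((req, res) => {
      // Plain HTTP only: TLS is terminated at the Cloud Foundry router.
      res.writeHead(200, { "Content-Type": "text/plain" });
      res.end("Hello from behind the Cloud Foundry router\n");
    }).listen(port);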

apache to tomcat: mod_jk vs mod_proxy

What are the advantages and disadvantages of using mod_jk and mod_proxy for fronting a tomcat instance with apache?
I've been using mod_jk in production for years but I've heard that it's "the old way" of fronting tomcat. Should I consider changing? Would there be any benefits?
A pros/cons comparison of those modules can be found at http://blog.jboss.org/
mod_proxy
* Pros:
  o No need for a separate module compilation and maintenance. mod_proxy, mod_proxy_http, mod_proxy_ajp and mod_proxy_balancer come as part of the standard Apache 2.2+ distribution.
  o Ability to use HTTP, HTTPS or AJP protocols, even within the same balancer.
* Cons:
  o mod_proxy_ajp does not support large 8K+ packet sizes.
  o Basic load balancer.
  o Does not support Domain model clustering.
mod_jk
* Pros:
  o Advanced load balancer.
  o Advanced node failure detection.
  o Support for large AJP packet sizes.
* Cons:
  o Need to build and maintain a separate module.
If you wish to stay in Apache land, you can also try the newer mod_proxy_ajp, which uses the AJP protocol to communicate with Tomcat instead of plain old HTTP, but which leverages mod_proxy to do the work.
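For a feel of the difference in day-to-day configuration, a minimal sketch of each approach (module paths, app name and ports are illustrative):

    # mod_jk: separate module plus a workers.properties file
    LoadModule jk_module modules/mod_jk.so
    JkWorkersFile conf/workers.properties
    JkLogFile     logs/mod_jk.log
    JkMount /myapp/* worker1

    # conf/workers.properties
    #   worker.list=worker1
    #   worker.worker1.type=ajp13
    #   worker.worker1.host=localhost
    #   worker.worker1.port=8009

    # mod_proxy_ajp: ships with Apache 2.2+, plain ProxyPass syntax
    LoadModule proxy_module     modules/mod_proxy.so
    LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
    ProxyPass        /myapp/ ajp://localhost:8009/myapp/
    ProxyPassReverse /myapp/ ajp://localhost:8009/myapp/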
AJP vs HTTP
When using mod_jk, you are using AJP. When using mod_proxy, you will use HTTP or HTTPS. And this is essentially what makes all the difference.
The Apache JServ Protocol (AJP)
The Apache JServ Protocol (AJP) is a binary protocol that can proxy inbound requests from a web server through to an application server that sits behind the web server. AJP is a highly trusting protocol and should never be exposed to untrusted clients, which could use it to gain access to sensitive information or execute code on the application server.
Pros
Easy to set up, as the correct forwarding of HTTP headers is not required.
It is less resource-intensive, because the TCP packets are forwarded in binary format instead of doing a costly HTTP exchange.
Cons
Transferred data is not encrypted. It should only be used within trusted networks.
Hypertext Transfer Protocol (HTTP)
HTTP functions as a request–response protocol in the client–server computing model. A web browser, for example, may be the client and an application running on a computer hosting a website may be the server. The client submits an HTTP request message to the server. The server, which provides resources such as HTML files and other content, or performs other functions on behalf of the client, returns a response message to the client. The response contains completion status information about the request and may also contain requested content in its message body.
Pros
Can be encrypted with SSL/TLS making it suitable for traffic across untrusted networks.
It is flexible, as it allows the request to be modified before forwarding, for example by setting custom headers (see the sketch after this list).
Cons
More overhead, as the correct forwarding of the HTTP headers has to be ensured.
More resource-intensive, as the request is fully parsed before forwarding.
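To illustrate that flexibility, a hedged httpd.conf sketch (hostnames, paths and the header are made up): the hop to the backend is encrypted with mod_ssl's SSLProxyEngine, and mod_headers modifies the request before it is forwarded.

    LoadModule proxy_module      modules/mod_proxy.so
    LoadModule proxy_http_module modules/mod_proxy_http.so
    LoadModule headers_module    modules/mod_headers.so
    LoadModule ssl_module        modules/mod_ssl.so

    SSLProxyEngine on                              # encrypt the hop to the backend
    RequestHeader set X-Forwarded-Proto "https"    # example of a custom header
    ProxyPass        /app/ https://tomcat.internal:8443/app/
    ProxyPassReverse /app/ https://tomcat.internal:8443/app/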