Multiple Docker containers on a single server - Apache

We are planning to host several Docker containers, each running a different Liferay web application, on a single Ubuntu server with a static IP, and point them to different websites. What is the easiest setup process?

AFAIK you have two choices:
(Easy) Listen on a different port for each web application:
yoursite.com:8080 --> some web app
yoursite.com:9090 --> another web app
(A bit more work) Use a virtual host approach, where you have just one proxy service listening (typically on port 80). Then configure a subdomain for each of your services, point them all at the same server, and have the proxy forward requests according to the domain name:
app-a.yoursite.com --> localhost:8080 --> some web app
app-b.yoursite.com --> localhost:9090 --> another web app
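
A minimal sketch of both options, assuming the subdomains above and placeholder container names/images (the second option requires mod_proxy and mod_proxy_http to be enabled):

# Option 1: publish each container on its own host port (image name is a placeholder)
docker run -d --name app-a -p 8080:8080 liferay/portal
docker run -d --name app-b -p 9090:8080 liferay/portal

# Option 2: one Apache reverse proxy on port 80, routing by subdomain
<VirtualHost *:80>
    ServerName app-a.yoursite.com
    ProxyPreserveHost On
    ProxyPass        / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/
</VirtualHost>
<VirtualHost *:80>
    ServerName app-b.yoursite.com
    ProxyPreserveHost On
    ProxyPass        / http://localhost:9090/
    ProxyPassReverse / http://localhost:9090/
</VirtualHost>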

Related

How to access a web application via SNI (Server Name Indication)?

I have a requirement to host multiple applications on the same public IP and port. I'm new to this area and I figured out that SNI can be used to meet this requirement. I decided to use Microsoft Application Gateway as the load balancer, and I can configure two apps with two SSL certificates. My question is: how can I access them from a browser? For example, if the server FQDN is www.example.com and there are two applications running on it, how do I indicate which application to load?
Each certificate needs to be associated with a specific FQDN for one application. Since you have two applications on the same IP and TCP port, you need to create two FQDNs (e.g. www.my1stappli.mydomain.com and www.my2ndappli.mydomain.com), generate two certificates (one for each FQDN), and configure the Azure Application Gateway to handle each application with its own certificate. If you have only one virtual machine hosting both applications, configure the Application Gateway to forward one application to port 80 of your virtual machine and the other to port 81 of the same machine.
Thus,
https://www.my1stappli.mydomain.com will be redirected to port 80 of your virtual machine
and https://www.my2ndappli.mydomain.com to port 81 of the same virtual machine

ASP.NET Core application is not accessible from an external load-balanced Azure VM

I have created a VM behind an external load balancer in Azure, and I am using IIS as the reverse proxy web server to host the ASP.NET Core application.
I am able to access the application inside the VM using localhost, but not able to access it from my client machine through the public IP configured for the load balancer.
I have configured load-balancing rules for incoming traffic on ports 80 and 443 for the load balancer and specified the appropriate NSGs for those ports.
Before deploying the ASP.NET Core application I was able to access the default website from my client machine, so I assume the inbound rules are being applied and working fine.
This is a self-contained application, and since I am able to access it inside the VM through localhost, I assume the ASP.NET Core hosting module and the other required configuration are in place.
Please let me know if there is anything else I could be missing.
I think I have figured out what the issue is.
The load balancer probe for the application was configured as HTTP (since it is a web server) and instructed to check the default path "/". Because the application I created does not serve anything on "/", the probe marks the node as unhealthy and the load balancer does not respond or serve anything.
I changed the probe to TCP and it works just fine.
Thanks,
Teja
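
An alternative to switching the probe to TCP is to keep the HTTP probe and give it a path that always answers 200. A minimal sketch (the /healthz path is an arbitrary choice, and the load balancer probe would have to be pointed at that same path):

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        // Respond to the load balancer's HTTP probe with a plain 200 OK.
        app.Map("/healthz", probe =>
            probe.Run(async context => await context.Response.WriteAsync("OK")));

        // ... the rest of the application's pipeline (MVC, static files, etc.) goes here.
    }
}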

Use Apache virtual hosts to access local servers?

I was wondering if it's possible to use Apache to request websites on a local network, with Apache being the gateway, so to speak. On my home network I currently have a Windows box running an ASP.NET site; it has to run under Windows/IIS, a server I'm not particularly fond of, but I can live with it. Alongside this I'm thinking about running an Apache server on a separate machine for my PHP applications, as well as some other applications (e.g. Plex).
Ideally I'd like to have Apache on port 80, listening for requests, and using the sort of functionality I have with a virtual hosts file to load content from another web server on my network that isn't directly accessible through its own port. I know I could just run PHP under IIS, or move one server to another port, but there's no fun in that!
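
A minimal sketch of what that gateway vhost could look like, assuming mod_proxy and mod_proxy_http are enabled and the Windows/IIS box sits at a hypothetical LAN address of 192.168.1.20:

<VirtualHost *:80>
    # Hypothetical hostname for the ASP.NET site; point its DNS (or hosts file) entry at the Apache box
    ServerName aspnet.home.lan
    ProxyPreserveHost On
    ProxyPass        / http://192.168.1.20/
    ProxyPassReverse / http://192.168.1.20/
</VirtualHost>

PHP applications can then live in ordinary virtual hosts on the same Apache instance, so only the Apache machine needs to expose port 80.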

How to host multiple HTTPS-enabled sites with unique SSL certificates on a single Service Fabric cluster

I have a Service Fabric cluster on Azure. I would like to use this cluster to host multiple ASP.NET Core based sites. All sites have to be accessible on the Internet via HTTPS (on port 443). Also, each site operates on a different domain and thus has its own SSL certificate; some sites even have wildcard certificates.
I've learned that using WebListener is the recommended way to host ASP.NET Core based sites on Service Fabric. As far as I know WebListener should support binding multiple sites to the same port by using the request HTTP headers to recognize the requested site. This is cool, but I have not found information on how to bind the SSL certificates to the sites (hostname). Is it even possible?
If it's not possible to bind certificates to the specific site when using WebListener, I don't know of any practical way of achieving this.
Does somebody have an idea how to solve this issue in a manner that is practical for adding new sites to the cluster with minimal work and expense (performance or infrastructure cost)?
I guess one way would be to use a unique port for each site and then do the mapping work on the Azure Load Balancer and/or Application Gateway. This could get complicated to manage and even costly (public IPs and an application gateway aren't exactly free).
So, having just spun up a new ASP.NET Core website, I can see that Program.cs contains a specific ICommunicationListener implementation for .NET Core. I would modify the following method on that listener to let you specify an app root, similar to what the default OWIN communication listener does for Web API. This would allow you to bind multiple sites to the single port.
// OpenAsync returns the listening address that Service Fabric publishes for this service.
Task<string> ICommunicationListener.OpenAsync(CancellationToken cancellationToken)
{
    // Resolve the endpoint declared in ServiceManifest.xml (_endpointName is a field of the listener).
    var endpoint = FabricRuntime.GetActivationContext().GetEndpoint(_endpointName);
    string serverUrl = $"{endpoint.Protocol}://{FabricRuntime.GetNodeContext().IPAddressOrFQDN}:{endpoint.Port}";

    // Build and start the WebListener-based host on that URL.
    _webHost = new WebHostBuilder()
        .UseWebListener()
        .UseContentRoot(Directory.GetCurrentDirectory())
        .UseStartup<Startup>()
        .UseUrls(serverUrl)
        .Build();
    _webHost.Start();

    return Task.FromResult(serverUrl);
}
The change might look like this (note that the WebListener/http.sys URL prefix format requires a colon between the "+" wildcard and the port):
string serverUrl = $"{endpoint.Protocol}://+:{endpoint.Port}/{this.appRoot}";
Then, within the service manifest file, tweak the endpoint configuration to run on HTTPS and port 443:
<Endpoints>
  <Endpoint Protocol="https" Name="ServiceEndpoint" Type="Input" Port="443" />
</Endpoints>
And then, in the Service Fabric application manifest, add the certificate (which should already be deployed to the VMs), using the thumbprint to identify which certificate to use, like so.
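A minimal sketch of that Certificates section, with a placeholder thumbprint:

<Certificates>
  <!-- Placeholder thumbprint; use the thumbprint of the certificate already deployed to the VMs -->
  <EndpointCertificate X509FindValue="0123456789ABCDEF0123456789ABCDEF01234567" Name="Cert1" />
</Certificates>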
Then, still within the application manifest, add a policy to bind that certificate to the endpoint for your service:
<ServiceManifestImport>
  <ServiceManifestRef ServiceManifestName="Web1Pkg" ServiceManifestVersion="1.0.0" />
  <ConfigOverrides />
  <Policies>
    <EndpointBindingPolicy EndpointRef="ServiceEndpoint" CertificateRef="Cert1" />
  </Policies>
</ServiceManifestImport>
EDIT:
Fixed a typo in the service URL (the app root was in the wrong location) and replaced the default Kestrel extension with WebListener.
EDIT 2:
Updated the service URL to use a wildcard because of how binding works with WebListener.
As you already said, one way to do it (we do it with three domains) is to have X public IP addresses. Each of those is forwarded to a different internal port, with a website listening on that port (or one app listening on multiple ports). Then you can assign SSL certificates and so on to each of those.
PublicIP1:443 -> Port 447 -> Webapp listening on port 447
PublicIP2:443 -> Port 448 -> Webapp listening on port 448
PublicIP3:443 -> Port 449 -> Webapp listening on port 449
All in all, using Service Fabric just as a hosting solution for websites is probably not something I would do. If I had to host multiple (read: many) websites, I'd do that on Azure App Service. If I then needed processing or persistence of data at scale, I'd look at maybe using SF for that. But that doesn't mean the websites have to run on SF.

IBM HTTP Server configured to communicate with WebSphere to serve HTTP/HTTPS

I have two IBM HTTP Servers with IPs 10.10.10.2 and 10.10.10.3 serving HTTP (port 80) and HTTPS (port 443). I also have WAS on 10.10.10.4 with HTTP (port 80) and HTTPS (port 443). Now I have to set up the two HTTP servers under a single domain name and forward HTTP/HTTPS requests for dynamic content to WebSphere.
I don't know how to do that. Can anyone help me with an example or a decent document?
I read about virtual hosts and also about the HTTP plug-in, but I couldn't understand the difference, or what the specific use of each is.
The HTTP plug-in is a WebSphere component that allows the web servers to communicate with the WAS server.
A virtual host is a configuration inside WAS.
When you deploy any web app, you associate it with a virtual host.
A virtual host is a collection of supported host name and port combinations (host aliases).
In your case, you have a domain name (say test.abc.com) that receives requests on ports 80 and 443.
Create a virtual host that contains two entries:
test.abc.com:80
test.abc.com:443
When you deploy a web app, associate it with this virtual host.
Generate the generic plug-in (I am assuming you have not defined a web server configuration in WAS) and copy the generated plug-in files to the web servers.
The HTTP server plug-in will use this generated plug-in file to route requests for the web apps to the application server.
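
As a rough sketch (the module name and paths are assumptions that vary by plug-in version and install location), the plug-in is wired into httpd.conf on each IBM HTTP Server with two directives:

# httpd.conf on each IBM HTTP Server
LoadModule was_ap22_module /opt/IBM/WebSphere/Plugins/bin/mod_was_ap22_http.so
WebSpherePluginConfig /opt/IBM/WebSphere/Plugins/config/webserver1/plugin-cfg.xml

and the generated plugin-cfg.xml carries the virtual host aliases defined in WAS, roughly:

<VirtualHostGroup Name="default_host">
  <VirtualHost Name="test.abc.com:80"/>
  <VirtualHost Name="test.abc.com:443"/>
</VirtualHostGroup>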
This article is very old but the basics mentioned here still hold true
http://public.dhe.ibm.com/software/dw/wes/pdf/WASWebserverplug-in.pdf
HTH
Manglu