Weblogic domain for physically separated managed servers - weblogic

I want to create a WebLogic cluster that has two managed servers, each running on a physically separate remote machine.
According to the WebLogic docs:
All Managed Servers in a cluster must reside in the same domain; you
cannot split a cluster over multiple domains.
Ref: https://docs.oracle.com/cd/E24329_01/web.1211/e24970/understand_domains.htm#DOMCF125
If this is the case, then where am I supposed to create the managed server on the remote machine? Since a managed server can only be created within a domain, am I not supposed to create a domain on the remote machine to hold the managed server?
[edit]
As per the documentation below:
https://docs.oracle.com/cd/E17904_01/web.1111/e14144/tasks.htm#WLDPU136
It seems that the admin server's domain is replicated onto the remote managed-server machines using the pack and unpack commands.
That means a separate copy of the domain must be made available on each remote machine in order to run managed servers on it.
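For my own understanding, that replication step looks roughly like this (paths, domain and template names are placeholders I made up, not values from the docs):

# Rough sketch of the replication step described above: pack the managed-server
# portion of the domain on the admin machine, copy the template JAR over, and
# unpack it on each remote machine. All paths and names are placeholders.
import subprocess
import sys

WL_COMMON_BIN = "/u01/oracle/wlserver/common/bin"   # assumed WL_HOME/common/bin
DOMAIN_HOME = "/u01/domains/mydomain"
TEMPLATE_JAR = "/tmp/mydomain_managed.jar"

if sys.argv[1] == "pack":        # run this on the admin server machine
    subprocess.run(
        ["./pack.sh", "-managed=true", f"-domain={DOMAIN_HOME}",
         f"-template={TEMPLATE_JAR}", "-template_name=mydomain_managed"],
        cwd=WL_COMMON_BIN, check=True)
elif sys.argv[1] == "unpack":    # run this on each remote managed-server machine
    # The template JAR must first be copied here, e.g. with scp.
    subprocess.run(
        ["./unpack.sh", f"-domain={DOMAIN_HOME}", f"-template={TEMPLATE_JAR}"],
        cwd=WL_COMMON_BIN, check=True)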
Is this a fault in the Oracle documentation?
Because then it would be a violation of the domain restrictions rule, which says that there should be only one domain per cluster.

A domain is a logical grouping of WebLogic resources such as realms, clusters, and managed servers. You can create managed servers on physically separate remote machines and group them in the same WebLogic domain.
In a WebLogic Server domain there is always one administration server. This special instance of WebLogic Server is responsible for the configuration of the entire domain. The other servers in the domain are called managed servers; these are typically the servers on which you run your applications. A domain can contain any number of managed servers. You can find the details at this link:
https://docs.oracle.com/cd/E17904_01/web.1111/e14144/tasks.htm#WLDPU136
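For example, a WLST (Jython) sketch along the following lines creates one cluster and two managed servers bound to two remote machines, all inside the single domain owned by the admin server. Hostnames, ports and credentials are placeholders; run it with wlst.sh against your admin server.

# WLST (Jython) sketch: one domain, one cluster, two managed servers, each
# targeted at a physically separate remote machine.
connect('weblogic', 'welcome1', 't3://adminhost.example.com:7001')
edit()
startEdit()

cd('/')
create('AppCluster', 'Cluster')

for name, host in [('MS1', 'remote1.example.com'), ('MS2', 'remote2.example.com')]:
    # A Machine entry tells the admin server which Node Manager controls the host.
    create(name + '_Machine', 'UnixMachine')
    cd('/Machines/' + name + '_Machine/NodeManager/' + name + '_Machine')
    cmo.setListenAddress(host)

    cd('/')
    create(name, 'Server')
    cd('/Servers/' + name)
    cmo.setListenAddress(host)
    cmo.setListenPort(7002)
    cmo.setCluster(getMBean('/Clusters/AppCluster'))
    cmo.setMachine(getMBean('/Machines/' + name + '_Machine'))
    cd('/')

save()
activate()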

Related

How to achieve high availability for Active Directory LDAPS (Secure LDAP)

We have around 50 applications currently configured with LDAP, and we have around 20 domain controllers. As per security best practice, we have to migrate all these applications from LDAP to LDAPS.
Currently, all applications connect using the domain's NetBIOS name, so there has been no need to worry about high availability.
What is the best design approach to achieve high availability for LDAPS?
We would prefer not to configure individual DC servers as LDAPS servers in the applications.
Note: all the servers (DCs and application servers) are enrolled in our on-prem PKI.
In my enterprise environment, there is a load balancer with a virtual IP which distributes traffic across multiple DCs. Clients access ad.example.com, and each DC behind ad.example.com has a cert valid both for hostname.example.com and ad.example.com (SAN, subject alternative name). This has the advantage of allowing the load balancer to manage which hosts are up -- if a target does not respond on port 636, it is automatically removed from the virtual IP. When the target begins responding, it is automatically added back. LDAP clients don't need to do anything unusual to use this high-availability AD LDAPS solution. The downside is that the server admin has ongoing maintenance as DCs are replaced -- we build a new server and then remove the old one. In doing so, the old IP is retired, and the new IP needs to be added to the load balancer's virtual IP config.
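From the client side nothing unusual is needed; a minimal sketch (assuming the ldap3 package, with placeholder names and paths) is just a standard LDAPS connection to the virtual name:

# Minimal LDAPS client sketch against the load-balanced virtual name.
# ad.example.com, the CA bundle path and the bind account are placeholders.
import ssl
from ldap3 import Server, Connection, Tls

tls = Tls(validate=ssl.CERT_REQUIRED, ca_certs_file='/etc/pki/tls/certs/corp-ca.pem')
server = Server('ad.example.com', port=636, use_ssl=True, tls=tls)

# The load balancer decides which DC actually answers; the SAN on each DC's
# certificate covers ad.example.com, so validation succeeds either way.
conn = Connection(server, user='CN=svc-ldap,OU=Service,DC=example,DC=com',
                  password='***', auto_bind=True)
print(conn.extend.standard.who_am_i())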
Another approach would be to use DNS to find the domain controllers -- there are SRV records registered both for the site's domain controllers and for all domain controllers. Something like _ldap._tcp.SiteName._sites.example.com will give you the DCs in example.com's SiteName site. For all DCs in the example.com domain, look up _ldap._tcp.example.com. This approach, however, requires the LDAP client to be modified to perform the DNS lookups. The advantage of this approach is that the DCs manage their own DNS entries; no one needs to remember to add a new DC to the DNS service records.
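If you take the DNS route, the client-side change is small. A sketch of the SRV lookup with the dnspython package (domain and site names are placeholders):

# Sketch: discover domain controllers for LDAPS via the _ldap SRV records,
# ordered by priority/weight. Requires the dnspython package; example.com and
# SiteName are placeholders. Note the SRV records advertise the LDAP port (389);
# an LDAPS client would still connect to port 636 on the returned hosts.
import dns.resolver

def find_dcs(domain='example.com', site=None):
    name = (f'_ldap._tcp.{site}._sites.{domain}' if site
            else f'_ldap._tcp.{domain}')
    records = dns.resolver.resolve(name, 'SRV')
    # Lower priority wins; higher weight is preferred within a priority.
    return sorted((r.priority, -r.weight, str(r.target).rstrip('.')) for r in records)

for priority, _neg_weight, host in find_dcs(site='SiteName'):
    print(f'candidate DC: {host} (priority {priority})')
    # an LDAPS client would try host:636 here and fall through on failure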

Can you create Kerberos principals where the hostname is flexible? (Docker)

I'm specifically trying to do this with Apache Storm (1.0.2), but it's relevant to any service that is secured with Kerberos. I'm trying to run a secured Storm cluster in Docker. There are a number of out-of-the-box Docker images out there for Storm, and they work great unsecured. I'm using https://github.com/Baqend/docker-storm. I also have Storm running securely on RHEL VMs.
However, my understanding is that Kerberos ties hostnames to principals, so if I'm making service foobar available to clients, I need to create a principal of foobar/hostname@REALM. Then a client service might connect to hostname with principal foobar, Kerberos will look up foobar/hostname@REALM in its database, find that it's there (because we created a principal with exactly that name), and everything will work.
In my case, it's described here: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0/bk_installing_manually_book/content/configure_kerberos_for_storm.html. The nimbus authenticates as storm/<nimbus host>@REALM, and the supervisors and outside clients authenticate as storm@REALM. Everything works.
But here in 2017, we have containers and hostnames are no longer static. So how would I Kerberize a service that runs in Docker Data Center (or Kubernetes, etc)? I have to attach an unknown hostname to the server authentication. I imagine I could create a principal for all possible hostnames and dynamically pick the right one at startup based on where the container lives, but that's kludgy.
Am I misunderstanding how Kerberos works? Is there a solution here that I don't see? I see multiple examples online of people running Storm in Docker, but I can't imagine that nobody's clusters are secure.
I don't know Apache Storm or Docker, but based on previous work with JBoss in a cluster where an inbound client could be connecting to any one of a number of different hosts, you would simply assign a virtual name to the entire pool at the load balancer and Kerberize the service according to the virtual name instead of the individual host name. So if you're making service foobar available to clients, you need to create a service principal (SPN) of foobar/virtualhostname@REALM in your directory to Kerberize the service with. You assign that SPN to a user account (not a computer account) to give it the flexibility to work with any Kerberized service which uses that SPN. If you are using Active Directory, you must create a keytab with the SPN inside of it and place the keytab on each host running the Kerberized service instance foobar/virtualhostname@REALM.
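To make that concrete, here is a minimal sketch, assuming the python-gssapi package, a placeholder SPN and keytab path, and a keytab already generated for the AD user account and copied to every host: each replica, whatever its actual hostname, acquires acceptor credentials for the same virtual-host principal.

# Sketch: each container/replica accepts Kerberos authentication as the same
# virtual-host SPN, regardless of its own (dynamic) hostname.
# foobar/virtualhostname@REALM and the keytab path are placeholders; the keytab
# is the one generated for the AD user account and distributed to every host.
import os
import gssapi

os.environ['KRB5_KTNAME'] = '/etc/security/keytabs/foobar.service.keytab'

service_principal = gssapi.Name('foobar/virtualhostname@REALM',
                                gssapi.NameType.kerberos_principal)

# Acceptor credentials come from the shared keytab; any client that targets
# foobar/virtualhostname@REALM can now authenticate against this replica.
server_creds = gssapi.Credentials(name=service_principal, usage='accept')
ctx = gssapi.SecurityContext(creds=server_creds, usage='accept')
# ctx.step(client_token) would then be driven by the application protocol.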

SQL Availability Group Listeners in Windows Azure

We have a staging and production SharePoint farm housed within Windows Azure. All servers run Windows Server 2012. We're having the same issues in both environments, but for this question, I'll focus on the staging environment.
For the staging environment, I have several servers within the SharePoint farm and 2 SQL servers. All servers are located on the same subnet and affinity group. There is a DHCP server that hands out 192.168.X.X addresses for all servers on the subnet.
I've created a WSFC with both SQL servers as nodes. I've tried creating the cluster with the IP of an unused DHCP address (192.168.X.X) and with a link-local address (using a PowerShell script from Microsoft, found online, to create the cluster). In both cases, the cluster IP is not accessible from any machine on the subnet. However, in both cases, the cluster appears to be up, and restarting the active node causes the passive node to become the new active node. I think that this may be one of my root problems.
My final goal is to create an SQL Availability Group Listener for SharePoint to use for DB connections. With the cluster created, I am able to create an Availability Group in SQL Management Studio. I can see that it works: when rebooting the primary replica, the secondary turns to primary, all DBs are synced and up to date, etc. However, when I try to create the AG Listener, it fails with an error claiming that it cannot access the cluster or the cluster is not active.
I've read a lot online. Some claim that it's not possible to create AGs in Azure, others claim that this hotfix fixes things (http://support.microsoft.com/kb/2854082), and a few claim it works when you set the listener IP to the public endpoint. I've tried them all and haven't had any success. There's got to be some way to increase the reliability of SQL in a totally enclosed Azure environment. Does anyone have any experience with this? Has anyone gotten it to work? If so, how did you do it? If not, is there another way to go about SQL availability?

How to test weblogic cluster servers?

I have created a cluster with 2 servers and I have developed a sample application. I can access this application via the IP addresses of these servers (10.0.0.3:7002/sample/ and 10.0.0.4:7002/sample/), but I don't know whether the cluster is working or not. Can I access this web application from a single address, like myclusteraddress:7002/sample/?
You can accomplish this task in two ways...
First Way
You can put a load balancer (such as an F5) in front of both servers; it automatically manages the traffic and serves the user requests from a single address.
Second Way
You can do a DNS cutover for that website, pointing a single DNS name at the servers; it accomplishes almost the same thing as the option above.
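Either way, once you have a single front-end address, you can sanity-check that both managed servers really serve traffic. A rough sketch (the address is a placeholder, and it assumes the sample app creates an HTTP session so the WebLogic session cookie, whose value embeds the JVM id of the hosting server, is present):

# Rough check that requests to the single front-end address land on both
# managed servers. Each fresh (cookie-less) request creates a new session, and
# the JSESSIONID value contains '!'-separated JVM ids identifying the server.
from collections import Counter
import urllib.request

FRONTEND = 'http://myclusteraddress:7002/sample/'   # placeholder front-end address

seen = Counter()
for _ in range(20):
    with urllib.request.urlopen(FRONTEND) as resp:
        cookie = resp.headers.get('Set-Cookie', '')
        jvm_id = cookie.split('!')[1] if '!' in cookie else 'unknown'
        seen[jvm_id] += 1

print(seen)   # two distinct JVM ids => both cluster members are serving requests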

Memcached in a trusted shared environment?

We are a university IT organization that hosts all of the university's websites on several shared servers on our server room floor. We have several VMs, each running its own instance of Apache as its web server.
If we were going to set up a memcached server, is it feasible to use it as a shared instance?
If it is shared by several servers, or even by multiple web apps running on the same server, what's the best way to keep each app's cache store separate? Prefix the keys?
Would each VM require its own instance of memcached, or could we set up one memcached server and allow our multiple VMs to read/write to it?
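If key prefixing is the way to go, this sketch is roughly what I have in mind (using the pymemcache client; host and app names are made up):

# Sketch: several apps sharing one memcached instance, kept apart by a
# per-application key prefix. Host and app names are made up.
from pymemcache.client.base import Client

def app_cache(app_name, host=('memcache.example.edu', 11211)):
    # key_prefix is applied transparently to every get/set for this client.
    return Client(host, key_prefix=app_name.encode() + b':')

news = app_cache('news-site')
events = app_cache('events-calendar')

news.set('homepage', b'news html', expire=300)
events.set('homepage', b'events html', expire=300)

# Same key name, different namespaces -- each app gets its own value back.
print(news.get('homepage'))    # b'news html'
print(events.get('homepage'))  # b'events html'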
We wrote bucket engine specifically to allow for a large number of memcached virtual instances running under a single process.