We are a university IT organization that hosts all of the university's websites on several shared servers in our server room. We run several VMs, each with its own instance of Apache serving its respective sites.
If we were going to set up a memcached server, would it be feasible to use it as a shared instance?
If it were shared by several servers, or even by multiple web apps running on the same server, what would be the best way to keep each app's cache store separate? Prefix the keys?
Would each VM require its own instance of memcached, or could we set up one memcached server and allow our multiple VMs to read from and write to it?
We wrote the bucket engine specifically to allow a large number of virtual memcached instances to run under a single process.
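The key-prefix idea from the question is also a reasonable, simpler route: have every app namespace its own keys on the shared instance. Below is a minimal sketch of that approach in Python, using the third-party pymemcache client; the hostname and prefixes are made-up placeholders.

    # Two apps share one memcached instance but namespace their keys,
    # so "user:42" from app1 and "user:42" from app2 never collide.
    from pymemcache.client.base import Client

    SHARED = ('memcached.example.edu', 11211)  # the one shared instance

    app1_cache = Client(SHARED, key_prefix=b'app1:')
    app2_cache = Client(SHARED, key_prefix=b'app2:')

    app1_cache.set('user:42', b'alice', expire=300)
    app2_cache.set('user:42', b'bob', expire=300)

    print(app1_cache.get('user:42'))  # b'alice'
    print(app2_cache.get('user:42'))  # b'bob'

Keep in mind that stock memcached has no access control, so a shared instance should be firewalled to just the VMs that need it.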
I want to create a WebLogic cluster that has two managed servers, each running on a physically separate remote machine.
According to the WebLogic docs:
All Managed Servers in a cluster must reside in the same domain; you cannot split a cluster over multiple domains.
Ref: https://docs.oracle.com/cd/E24329_01/web.1211/e24970/understand_domains.htm#DOMCF125
If this is the case, then where am I supposed to create the managed server on the remote machine? Since a managed server can only be created within a domain, am I not supposed to create a domain on the remote machine to hold the managed server?
[edit]
As per the documentation below:
https://docs.oracle.com/cd/E17904_01/web.1111/e14144/tasks.htm#WLDPU136
It seems that the admin server's domain is replicated to the remote machines using the pack and unpack commands.
That means a separate copy of the domain must be made available on the remote machines in order to run managed servers on them.
Is this a fault in the Oracle documentation? It seems to violate the domain-restriction rule, which says there should be only one domain per cluster.
A domain is a logical group of all WebLogic resources, such as realms, clusters, and managed servers. You can create managed servers on physically separate remote machines and group them in the same WebLogic domain. The files that pack and unpack copy to a remote machine are just that machine's footprint of the same logical domain, not a second domain, so the one-domain-per-cluster rule is not violated.
In a WebLogic Server domain there is always one administration server. This special instance of WebLogic Server is responsible for the configuration of the entire domain. The other servers in the domain are called managed servers; these are typically the servers on which you run your applications. A domain can contain any number of managed servers. You can find the details at this link:
https://docs.oracle.com/cd/E17904_01/web.1111/e14144/tasks.htm#WLDPU136
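To make the setup concrete: since WLST scripts are written in Jython (Python), here is a rough sketch of defining a second managed server that runs on a remote machine but belongs to the same domain and cluster. All names, ports, and credentials are placeholders.

    # Run with WLST against the admin server, e.g.: java weblogic.WLST create_ms.py
    connect('weblogic', 'welcome1', 't3://adminhost:7001')

    edit()
    startEdit()

    # The managed server is defined in the admin server's domain configuration;
    # only its listen address points at the remote machine.
    ms = cmo.createServer('managed_server_2')
    ms.setListenAddress('remotehost.example.com')
    ms.setListenPort(7003)
    ms.setCluster(getMBean('/Clusters/my_cluster'))

    save()
    activate()
    disconnect()

After activating, you would use pack/unpack (as the documentation above describes) to copy the domain files to the remote machine and start the managed server there; that copy is a footprint of the same domain, not a new one.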
Is it possible to restrict public access to an Apache web server in a way similar to how an SSH server can restrict access via public keys?
Setting:
I've got a micro server running Apache and a web application. The application needs to be accessible to fewer than 10 users; I want to exclude everyone else, especially bots, attackers, etc.
Here is an idea: use SSH tunnelling, and firewall the web server so that it only accepts connections from local addresses.
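Sketching that idea in Python with the third-party sshtunnel package (hostnames, usernames, and key paths are placeholders): Apache is bound or firewalled to 127.0.0.1, and each of the ~10 users opens a tunnel with their SSH key, so only keyholders can ever reach the app.

    # Equivalent to: ssh -L 8080:127.0.0.1:80 alice@webserver.example.com
    from sshtunnel import SSHTunnelForwarder
    import urllib.request

    with SSHTunnelForwarder(
        ('webserver.example.com', 22),          # the micro server's SSH daemon
        ssh_username='alice',
        ssh_pkey='/home/alice/.ssh/id_rsa',     # public-key auth, as with any SSH login
        remote_bind_address=('127.0.0.1', 80),  # Apache only listens locally
        local_bind_address=('127.0.0.1', 8080),
    ):
        # While the tunnel is up, the app is reachable at http://localhost:8080/
        print(urllib.request.urlopen('http://localhost:8080/').status)

In practice most users would just run the plain ssh -L command shown in the comment; the effect is the same.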
On my Windows server, I will be hosting a few unrelated websites to which I would like to add the features of OSQA. As such, there will be no shared data between the OSQA instances.
Is it possible to have multiple OSQA instances running off the same database (I'm guessing that if it isn't supported, some DB and script tweaking would be required to identify the requesting site)? Or, alternatively (and probably the simplest option), can I run several OSQA instances on the same box?
I have taken a look at the BitNami OSQA stack, and it may be the simplest solution. However, it installs Apache, and I don't want multiple instances of Apache running on my box either.
I would also like to be able to access these instances through IIS.
You should be able to install different OSQA instances on the same database server, but you will need to create a separate database (on that database server) for each instance. Unfortunately, we (BitNami) currently support neither IIS nor multiple OSQA installations on the same Apache server, so you will need to set that up manually.
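Since OSQA is a Django application, the per-instance split usually comes down to each copy having its own settings file pointing at its own database. A hedged sketch, assuming a modern Django-style DATABASES setting; the paths, database names, and credentials are made up for illustration:

    # /var/www/site1/osqa/settings_local.py  -- one copy per OSQA instance
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.mysql',
            'NAME': 'osqa_site1',    # each instance gets its own database...
            'USER': 'osqa_site1',
            'PASSWORD': 'secret1',
            'HOST': 'localhost',     # ...on the same database server
        }
    }
    APP_URL = 'http://site1.example.com'  # this instance's base URL

A second instance would repeat this with its own database (e.g. osqa_site2) and its own APP_URL.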
I have a cluster of EC2 servers running Ubuntu 12.04. This will be a dev environment where several developers will be SSH-ing in. I would like to set it up so that the /home directory is shared across all four of these servers. I want to do this to (a) ease the deployment of the servers, and (b) make it easier on the devs, so that everything in their home directory is available to them on all servers.
I have seen this done in the past with NetApp network-attached storage, but I can't figure out how to build the equivalent using AWS components.
Does anyone have an idea of how I can create this same setup using Amazon services?
You'll probably need to have one server host an NFS share that stores the home directories. I'd try the approach described in this answer: https://serverfault.com/questions/19323/is-it-feasible-to-have-home-folder-hosted-with-nfs
I have two VPSes on Linode and one NodeBalancer: one DB VPS and one app VPS. I send traffic through the NodeBalancer so that it's easier to scale out the app servers eventually. My question is two-fold:
Do you suggest I just drop memcached and use Redis for caching? I'm using Redis for other things (such as Resque), and this would make my infrastructure that much simpler.
I also intend to use Redis as a session store. When I'm ready to add another application server VPS, is it possible to make it seem like one virtual Redis store across the two VPSes? This is also important for caching.
Also, do you suggest I install Redis, Elasticsearch, and Monit on each VPS, or should I have a separate VPS for these services and another one for my application server?
Many thanks.
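On the second point: one common way to make two Redis VPSes look like a single store is client-side sharding, where the client hashes each key to pick a node. A minimal sketch with the redis-py client (hostnames are placeholders):

    import hashlib
    import redis

    # The two Redis nodes, one per VPS.
    NODES = [
        redis.Redis(host='redis-vps-1.example.com', port=6379),
        redis.Redis(host='redis-vps-2.example.com', port=6379),
    ]

    def node_for(key):
        """Hash the key so the same key always lands on the same VPS."""
        digest = hashlib.md5(key.encode()).hexdigest()
        return NODES[int(digest, 16) % len(NODES)]

    # Session-store usage: write with a one-hour TTL, read back from the same node.
    node_for('session:abc123').setex('session:abc123', 3600, 'user_id=42')
    print(node_for('session:abc123').get('session:abc123'))

Note that naive modulo hashing remaps most keys when you add a node; consistent hashing (or a sharding proxy in front of Redis) avoids that, which matters once you start scaling out.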