If I have two directories on an NFS server between which I would like to copy a large amount of data (several thousand files, rather than one large block), is there any way to make this a "local" copy on the server? Does NFS do this automatically, and if not, is there an option to enable it, or is there an inevitable hit on the client? SSHing into the NFS server is not an option, sadly; the NFS mount is the only access I have to it.
No, NFS doesn't do this, unfortunately. There is no provision in the protocol for the source of the copy to know anything about the destination or vice versa.
Without ssh or similar access, you can't do anything other than drag every byte across the network to the client, and send it back across the network to the server, a block at a time.
You might get some speed-up if you can use tar or dd or some other command where you can modify the block size. But I wouldn't bet on it.
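For example, a tar pipe between the two mount points (the paths here are hypothetical) moves the data as one large stream instead of thousands of per-file round trips, though every byte still crosses the network twice:
tar -C /mnt/nfs/source -cf - . | tar -C /mnt/nfs/destination -xpf -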
I am currently using Unison across two computers (a server and a laptop). I need to set up another connection so I can back up my data from the server on a regular basis.
laptop <--> server -> backup
Here the connection from the server to the backup can be unidirectional. Is there any way to accomplish this?
This is a very common thing to do. When setting up Unison across multiple machines, you should prefer a star topology. So, if you can, run another instance of Unison, along with any backup-related scripts, on the machine where you store your backups. It should look about the same as your setup on your laptop (depending on where your backups are being stored).
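For the one-way backup leg, one sketch (the paths and host name below are placeholders) is to run Unison on the backup machine with -force so that the server's replica always wins:
# run on the backup machine; changes always flow from the server to the backup
unison /backups/data ssh://server//home/user/data -force ssh://server//home/user/data -batch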
I need to clarify a concept. I have two Redis servers running on a single VM. Server #1 listens on TCP, server #2 listens on a UNIX socket. I'm on the cusp of converting the TCP server to a UNIX socket as well.
An excerpt from the init.d script for server#1 is:
DAEMON=/usr/bin/redis-server
DAEMON_ARGS=/etc/redis/redis.conf
NAME=redis-server
DESC=redis-server
RUNDIR=/var/run/redis
PIDFILE=$RUNDIR/redis-server.pid
The comparable excerpt from the init.d script for server#2 is (which has its own config):
DAEMON=/usr/bin/redis-server
DAEMON_ARGS=/etc/redis/redis-2.conf
NAME=redis2-server
DESC=redis2-server
RUNDIR=/var/run/redis
PIDFILE=$RUNDIR/redis2-server.pid
Both servers are currently up and running. My question is: how come DAEMON is kept the same for both servers? Why wasn't a separate executable needed?
I configured the two servers using config from various internet forums. While it works, I've failed to understand the significance of the DAEMON value, given that it remains the same for both server instances. Is it because the executable is fed different config files, and thus the same DAEMON is able to handle multiple server instances? Being a beginner, I'd really like some expert opinion on this. Thanks in advance.
Open terminal (or cmd). Now open it again. You have two copies open, but they are both using the same executable.
You're doing the same with Redis: DAEMON just says where to find the program, and since you're happy to use the same version of Redis for both, you can use the same path for both DAEMON values. Each instance has its own process ID stored in its PIDFILE, which is why those need to be different paths, or the instances will interfere with each other.
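To make that concrete, the behavioural differences live in the config files, not in the binary. A hypothetical sketch of how the two configs might differ (your actual files will vary):
# /etc/redis/redis.conf  (server #1: TCP)
port 6379
pidfile /var/run/redis/redis-server.pid
# /etc/redis/redis-2.conf  (server #2: UNIX socket)
port 0
unixsocket /var/run/redis/redis-2.sock
pidfile /var/run/redis/redis2-server.pid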
I have a WebLogic 12 cluster. Files get pushed to it both through HTTP forms and through scp to a single machine in the cluster. However, I need the files on all the nodes of the cluster. I can run scp myself and copy the files to the rest of the cluster, but I was hoping that WebLogic supported this functionality in some manner. I don't have a disk shared between the machines that would make this easier, nor can I create one.
Does anybody know?
No. There is no way for WLS to ensure that a file copied onto one WLS instance is copied to the others, especially when the file arrives out of band via scp.
Please use a shared storage mount, so that all managed servers can refer to the same location without the need for scp.
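If shared storage really isn't an option, the manual fan-out mentioned in the question can at least be scripted. A minimal sketch; the node names and upload path are hypothetical:
#!/bin/sh
# push newly uploaded files from this node to every other node in the cluster
for host in node2 node3 node4; do
    scp -r /apps/uploads/* "$host:/apps/uploads/"
done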
I'm trying to architect a system on GCP for scalable web/app servers. My initial intention was to have one disk per web server group hosting the OS, and another hosting the source code, imagery, etc. My idea was to mount the OS disk on multiple VM instances so as to have exact clones of the servers, with one place to store PHP session files (so moving between different servers would be transparent and not cause problems).
The second idea was to mount a 2nd disk, containing the source code and media files, which would then be shared with 2 web servers, one configured as a CDN server and one with the main website and backend. The backend would modify/add/delete media files, and the CDN server would supply them to the browser when requested.
My problem arises when reading that a Persistent Disk is only mountable on a single VM instance with read/write access; if it's needed on multiple instances, it can be mounted only with read-only access. I need one of the instances to have read/write access, with the others (possibly many) having read-only access.
Would you be able to suggest ways or methods on how to implement such a system on the GCP, or if it's not possible at all?
Unfortunately, it's not possible.
But you can create a single-node file server and mount it as a read/write share on the other VMs.
GCP has documentation on how to create a single-node file server.
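Once the file server is up, each web server VM can mount its export over NFS. A minimal sketch; the server name and export path are hypothetical:
sudo mount -t nfs singlefs-1:/data /mnt/shared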
An alternative to using a persistent disk (which, as you said, only allows a single read/write mount or many read-only mounts) is to use Cloud Storage, which can be mounted through FUSE.
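For example, with the gcsfuse tool installed, a bucket can be mounted like a filesystem (the bucket name and mount point below are placeholders):
gcsfuse my-media-bucket /mnt/media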
How does Apache fare with respect to the C10K problem under normal conditions?
Say, while running very small scripts with little data, or do I need to scale out if I use Apache?
In the background, the heavy lifting is done by a few servers running specialized software that processes the requests, but I'd like to use Apache as the front. Is this a viable plan?
I consider Apache to be more of an origin server - running something like mod_php or mod_perl to generate the content and being smart about routing to the appropriate system.
If you are getting thousands of concurrent hits to the front of your site, with a mix of types of data (static and dynamic) being returned, you may find it useful to put a more optimised system in front of it though.
The classic post-optimisation problem with Apache isn't generating the dynamic content (or at least, that can be optimised for early in the process), but simply waiting for a slow client to be able to receive the bytes that are being sent. It can therefore be a significant advantage to put a reverse proxy, in the form of Squid or Nginx, in front of the servers to take over the 'spoon-feeding' of the slow network clients, while allowing the content production to happen at full speed, and at local network speeds - 100Mb/sec or even gigabit speeds - if it even has to traverse a network at all.
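As an illustration of that pattern, here is a minimal reverse-proxy sketch, assuming Nginx listens on port 80 and Apache listens on 127.0.0.1:8080 (both values are placeholders):
server {
    listen 80;
    location / {
        # nginx buffers the generated response and spoon-feeds slow clients,
        # so the Apache worker is freed as soon as the page is produced
        proxy_pass http://127.0.0.1:8080;
        proxy_buffering on;
    }
}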
I'm assuming you've probably seen this data, but if not, it might give you some idea.
Imagine that you are running a web server with 10K simultaneous connections. Where would they come from?
You've got many, many connections per second. A few scenarios:
Dynamic content
Are you sure your CPU can handle that many PHP requests, for example? I'd guess not, so why are you worrying about the C10K problem? :D
Static content - small files
And still that many connections? On a single server? Then you probably have networking/throughput problems too, or you are a future competitor of Google. Use lighttpd, which addresses the C10K problem and is stable ("fly light"). For a large site serving only static files, Apache is obviously not the best choice.
Your clients are downloading large files over long periods (static content)
ISO images, archives, etc.
If you are serving these via a web server, FTP may be more appropriate.
Video streaming
Use lighttpd or specialized streaming software. And still: what about the other resources?
I am using Linux Virtual Server as a load balancer in front of Apache servers (with specific patches for LVS-NAT) and I am happy :) That is probably the answer you want to hear.
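For reference, a bare-bones LVS-NAT setup with ipvsadm looks something like the sketch below; the virtual IP, real-server addresses and round-robin scheduler are all placeholders:
# create the virtual HTTP service with round-robin scheduling
ipvsadm -A -t 203.0.113.10:80 -s rr
# add the Apache back-ends as NAT (-m, masquerading) real servers
ipvsadm -a -t 203.0.113.10:80 -r 192.168.1.11:80 -m
ipvsadm -a -t 203.0.113.10:80 -r 192.168.1.12:80 -m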