Does NFS download files to the client or does the client access them remotely on the NFS server?

I've been studying NFS, and what I don't understand is this: after the client receives the file handle from the server (at the end of the whole NFS/mountd/nfsd communication process), is the file data then written somewhere on the client, so that the client reads/writes that local copy and later sends it back over the network to the server? Or does the client read and write the file on the server over the network? Thanks!

As the name says, NFS (Network File System) means accessing files that reside on the server. So every client NFS READ/WRITE request fetches the data from the server over the network. Usually, all NFS client implementations use some file/data caching mechanism: once the data is read from the server, the client can store it in its own cache (the buffer cache, etc.) for subsequent reads so as to improve performance. As long as the client cache is valid, it doesn't need to fetch the data from the server again and again.
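To make the transparency concrete, here is a minimal sketch in PHP (hypothetical mount point and file name). The application opens the file as if it were local; the kernel's NFS client turns the read into READ RPCs to the server, or serves repeated reads from its local cache:

```php
<?php
// Hypothetical setup: the server's export has been mounted on the client,
// e.g. with:  mount -t nfs fileserver:/export /mnt/nfs
// Nothing is downloaded as a whole file first; the kernel's NFS client
// issues READ RPCs over the network for the blocks being read, and may
// satisfy repeated reads from its local page cache instead.
$data = file_get_contents('/mnt/nfs/report.txt');
echo strlen($data), " bytes read via NFS\n";
```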

Related

file upload to SFTP server from a jsp page

I have a webpage that contains a button to upload a file. My requirement is that when the user chooses a file to upload and submits it, the file should get transferred to an SFTP server. My question is: do I need an SSH client installed on the client machine to achieve this?
I thought of uploading it to my HTTP server as a temporary file first and then sending it to the SFTP server from there, but then what's the purpose of SFTP in the first place, as the file will be transmitted to the server unencrypted?
JSP is a server-side technology. If JSP is going to be involved, then the code has to run on the server.
My question is: do I need an SSH client installed on the client machine to achieve this?
No. The server has to do the work.
I thought of uploading it to my HTTP server as a temporary file first and then sending it to the SFTP server from there
That's how you would have to do it.
There's no way to interact with the SFTP protocol directly from client-side code in a webpage.
then to the SFTP server from there, but then what's the purpose of SFTP in the first place
Good question, but you decided to use that technology, so that's up to you.
as the file will be transmitted to the server unencrypted.
To secure communications between the browser and the HTTP server, use HTTPS instead of plain HTTP.
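A minimal sketch of that server-side relay in PHP using the phpseclib library (hypothetical host names, credentials and paths; the same flow applies to a JSP/servlet implementation). The browser uploads over HTTPS, and the web server pushes the temp file on to the SFTP server:

```php
<?php
// upload.php - hedged sketch of the HTTPS-then-SFTP relay described above.
// Host, credentials and paths are placeholders.
// Requires phpseclib:  composer require phpseclib/phpseclib
require 'vendor/autoload.php';

use phpseclib3\Net\SFTP;

if (!isset($_FILES['upload']) || $_FILES['upload']['error'] !== UPLOAD_ERR_OK) {
    http_response_code(400);
    exit('Upload failed');
}

$sftp = new SFTP('sftp.example.com');              // hypothetical SFTP host
if (!$sftp->login('user', 'secret')) {
    http_response_code(502);
    exit('SFTP login failed');
}

// Stream the uploaded temp file straight to the SFTP server.
$sftp->put('/incoming/' . basename($_FILES['upload']['name']),
           $_FILES['upload']['tmp_name'],
           SFTP::SOURCE_LOCAL_FILE);
echo 'OK';
```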

Does a proxy have to load everything completely before sending it back?

I've been using proxy services, and I want to know some of the details behind them, regarding speed and efficiency. Consider the following scenario:
There's an mp3 file on server M. A client wants to download that file, but he doesn't want to expose himself, so he decides to use a proxy website to download it. The GET request for the mp3 is therefore sent to proxy server P first, and the proxy server then fetches the mp3 for the client. Here's my question about the details:
Does P have to download the entire mp3 file before it can pass it on to the client? If so, the file is downloaded twice (first by the proxy server, then by the client's machine), taking about twice the amount of time?
Proxies normally operate in two modes: HTTP and CONNECT.
The CONNECT mode is for black-box protocols like HTTPS or FTP, where most of the data consists of opaque octet streams, because they are encrypted or unstructured files.
However, for HTTP, proxies are pretty smart. One of the things they do is cache content, such as images and web page contents, when you download a website in your browser via a proxy. Moreover, for octet streams under HTTP, proxies show the CONNECT behavior, meaning that they open a relay socket and let you download the content. In the meantime, they store it locally, and if it doesn't exceed a certain size, the file is also cached.
The files are also forwarded, or relayed (sometimes called rewriting). A sample config file, for instance, shows Squid configured to forward YouTube videos without caching them.
Another reason why downloading everything first and then forwarding is not an option is that it doubles the round-trip time (RTT). It is really counterintuitive to add another RTT that slows down an HTTP session.
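For illustration, a minimal single-connection relay sketch in PHP (hypothetical hosts, ports and paths): the proxy forwards each chunk to the client as soon as it arrives from the origin, optionally writing a cache copy on the side, rather than downloading the whole file first:

```php
<?php
// Hedged sketch of the relay behaviour described above; a real proxy also
// parses headers, handles many connections, enforces cache limits, etc.
$server = stream_socket_server('tcp://0.0.0.0:8080', $errno, $errstr);
$client = stream_socket_accept($server, -1);        // wait for one client

$origin = stream_socket_client('tcp://origin.example.com:80', $errno, $errstr, 10);
fwrite($origin, "GET /song.mp3 HTTP/1.0\r\nHost: origin.example.com\r\n\r\n");

$cache = fopen('/tmp/song.mp3.cache', 'wb');        // optional local cache copy
while (!feof($origin)) {
    $chunk = fread($origin, 8192);                  // a chunk arrives from the origin...
    fwrite($client, $chunk);                        // ...and is relayed immediately,
    fwrite($cache, $chunk);                         // ...while also being cached
}
fclose($cache);
fclose($client);
```

Note that the client starts receiving data after roughly one extra RTT, not after the proxy has finished its own download.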

Load balancing server

I would like to know about load-balancing servers.
I have an application that runs behind a load balancer.
When I make a change to the data, how does it take effect in my application?
Also, when we restart the application, what are all the steps that happen on a load-balanced server?
Well, the load balancer is separate from the application code; basically it just routes requests to one of a number of configured servers (a.k.a. downstream servers, for instance web application servers, Apache/nginx + PHP, etc.) that handle the actual request. So to update the application (i.e. a Java servlet, JSP, PHP page, static HTML page, image, etc.), all the downstream servers have to be updated. As for data (i.e. articles, the user database, etc.), this is usually stored in a database that all the downstream servers connect to.
As for restarting the application: when you do that on one of the downstream servers, it will temporarily be unable to service requests. The load balancer will thus get an "unable to connect" error when trying to send requests to the server whose application is being restarted, and will then try the next server in the list of downstream servers. Depending on how the load balancer is set up, it will automatically retry the restarted downstream server, and when that server is up again it will service requests as before. So to update the application, you basically update one downstream server at a time; since the other servers take over the load while each one restarts, there is no downtime, and the clients are none the wiser.
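To make that concrete, here is a minimal sketch of such a setup as an HAProxy configuration (hypothetical names and addresses; nginx or a hardware appliance expresses the same idea differently). The `check` keyword gives the health-checking behaviour described above, so a restarting downstream server is routed around automatically:

```
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend www
    bind *:80
    default_backend app_servers

backend app_servers
    balance roundrobin                  # rotate requests across downstream servers
    server app1 10.0.0.11:8080 check    # 'check' = periodic health check
    server app2 10.0.0.12:8080 check    # a restarting server is temporarily skipped
```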
Is this a hardware appliance or a server running HAProxy/nginx/other?

HTTPS or SSH for server-server communication

I have two servers. I wish to send some data (I was doing it with HTTP GET until now) to a PHP file residing on the other server and get some output from it.
Of late, I saw the requests per second go up to 50, and Apache served an HTTP 500 error for some of them. This server has 512 MB of RAM, and the script, in php-cli mode, usually eats up around 10 MB of memory.
I wish, if it would reduce the load on the server, to use SSH instead of HTTPS. Will it reduce the memory usage on this server (beyond what the script itself needs)? Or will too many SSH connections still cause problems?
Note: I do not have HTTPS set up right now, but I am planning to switch over to it. And just then, this issue cropped up.
SSH will not speed up your program. What you can do is create your own server on the destination server (the one that will receive the data). A web server does much more than just receive data, like interpreting HTTP headers and routing your requests to files. Your own server can do the same job in a much lighter way.
http://br.php.net/manual/en/sockets.examples.php has an example of how to do this.
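A hedged sketch of such a lightweight receiver (hypothetical port and wire format), using PHP's stream functions rather than the lower-level sockets extension:

```php
<?php
// Accepts raw TCP connections and skips all HTTP parsing/routing overhead.
$server = stream_socket_server('tcp://0.0.0.0:9000', $errno, $errstr);
if ($server === false) {
    die("Could not start server: $errstr ($errno)\n");
}

while ($conn = stream_socket_accept($server, -1)) { // block until a client connects
    // Read until the client closes (or half-closes) its side of the connection.
    $data = stream_get_contents($conn);
    // ... process $data here, as the PHP script did before ...
    fwrite($conn, "OK\n");                          // send the output back
    fclose($conn);
}
```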
I would use normal HTTP, but encrypt the data sent.
How big is the data you want to transfer? Depending on the size, SSH (in particular, SFTP) might very well be better. I say that because... well, when was the last time you tried to upload a 10 MB file via a webpage and succeeded? Uploading small files works for HTTP, but at the end of the day, it's not a file transfer protocol.
My recommendation would be to use phpseclib, a pure PHP SFTP implementation. Upload a file via SFTP and then run a PHP script on that file via SSH.
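A hedged sketch of that recommendation (hypothetical host, credentials and paths). Requires phpseclib (`composer require phpseclib/phpseclib`):

```php
<?php
require 'vendor/autoload.php';

use phpseclib3\Net\SFTP;
use phpseclib3\Net\SSH2;

$host = 'destination.example.com';

// 1. Upload the data file over SFTP.
$sftp = new SFTP($host);
$sftp->login('user', 'secret') or die('SFTP login failed');
$sftp->put('/tmp/payload.dat', '/local/payload.dat', SFTP::SOURCE_LOCAL_FILE);

// 2. Run the existing PHP script against it over SSH.
$ssh = new SSH2($host);
$ssh->login('user', 'secret') or die('SSH login failed');
echo $ssh->exec('php /var/www/process.php /tmp/payload.dat');
```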

Securely transferring data from a server to an external database

Reason: We have a new client who wishes for the database containing all their info to be stored on their own personal database server. However, the web server will be located at another location.
Question: How can you secure the data from the time it is input until the time the external database saves it?
From some reading, it seems that SSL will only cover so much and that some sort of secure connection must be set up between the two. Or does SSL cover this connection as well? It somewhat seems that it should.
SSL provides a reasonable solution to transport security (keeping the data safe from prying eyes as it goes over the wire).
Lock down the endpoints between the two systems as far as practical. For example, in addition to encryption, our firewall blocks access to the database except from well-known IP addresses.
You still need to ensure that your web server is secure (since the data is available unencrypted there), and that their database server is secure (including encryption of sensitive data when stored in database tables).
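As a sketch of the transport-security piece on the web-server side, here is PDO connecting to MySQL over SSL/TLS (hypothetical host, credentials and certificate paths; other databases and drivers expose similar options):

```php
<?php
$dsn = 'mysql:host=db.client.example.com;dbname=clientdb';
$options = [
    PDO::MYSQL_ATTR_SSL_CA   => '/etc/ssl/certs/client-db-ca.pem', // trust anchor
    PDO::MYSQL_ATTR_SSL_CERT => '/etc/ssl/certs/webserver.pem',    // client certificate
    PDO::MYSQL_ATTR_SSL_KEY  => '/etc/ssl/private/webserver.key',  // client private key
];
$db = new PDO($dsn, 'webapp', 'secret', $options);

// Data now travels encrypted between the web server and the database server.
$stmt = $db->prepare('INSERT INTO submissions (payload) VALUES (?)');
$stmt->execute(['sensitive data']);
```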