I heard that WebSockets (e.g. socket.io) are very fast, but they require a dedicated connection for each client. Are they suitable for uploading files to a video hosting service with many clients and frequent uploads? Or will that fail, so that only Ajax can be used in this case?
I'd say it depends on the file sizes and how long connections to clients last.
If you chunk uploads using the HTML5 File API and then send the data over WebSockets, you can dramatically reduce the amount of data transferred, because WebSockets don't need to send HTTP headers with every request; those headers add up if, for example, you split a 1 GB file into 5 MB chunks.
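A minimal sketch of that approach, assuming a plain WebSocket endpoint (the URL, chunk size, and metadata frame are illustrative; a real uploader would also want acks and retries):

```typescript
// Sketch: slice a File into 5 MB chunks and stream them over one
// WebSocket connection. URL, chunk size, and the metadata frame are
// illustrative; a real uploader would add acks, retries, and resume.
const CHUNK_SIZE = 5 * 1024 * 1024;
const MAX_BUFFERED = 8 * 1024 * 1024; // crude backpressure threshold

async function uploadFile(file: File, url: string): Promise<void> {
  const ws = new WebSocket(url);
  await new Promise<void>((resolve, reject) => {
    ws.onopen = () => resolve();
    ws.onerror = () => reject(new Error("connect failed"));
  });

  // One small text frame with metadata, then raw binary chunks.
  ws.send(JSON.stringify({ name: file.name, size: file.size }));

  for (let offset = 0; offset < file.size; offset += CHUNK_SIZE) {
    // file.slice() returns a lazy Blob view, so the whole file is
    // never held in memory at once.
    ws.send(file.slice(offset, offset + CHUNK_SIZE));

    // Wait for the socket buffer to drain before queueing more.
    while (ws.bufferedAmount > MAX_BUFFERED) {
      await new Promise((r) => setTimeout(r, 50));
    }
  }
  ws.close();
}
```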
If clients are persistently connected, WebSockets also remove the need for long polling, which wastes resources on your server when there is no new information to push to the client.
WebSockets will therefore reduce the resources required, but they are not available in every browser.
Related
I am uploading very large files (gigabytes) to an application server, with Apache as a proxy.
I want to stream-process these in order to (1) use less memory and (2) prevent a timeout from an AWS load balancer while I process the upload.
It seems that Apache does a substantial amount of buffering when I upload; if it buffers too much, I can't accomplish either objective.
What determines the amount of upload buffering, and how can I configure it?
NOTE: To be clear, I am asking about Apache, not PHP.
I am building a project that requires a constant connection to the server.
There are two major ways to achieve this:
Ajax pull
Ajax push
I have to decide between pinging a server (expensive) and maintaining keep-alive connections (firewalls block those).
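For concreteness, by "Ajax push" I mean a long-polling loop along these lines (the URL and payload shape are placeholders):

```typescript
// Minimal long-polling ("Ajax push") loop. The server is expected to
// hold the request open until it has data or times out; the URL and
// the JSON payload shape are placeholders.
async function pollLoop(url: string, onEvent: (e: unknown) => void) {
  for (;;) {
    try {
      const res = await fetch(url);            // held open by the server
      if (res.ok) onEvent(await res.json());   // data finally arrived
    } catch {
      await new Promise((r) => setTimeout(r, 1000)); // back off on error
    }
  }
}
```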
I was thinking about live video streams: they are neither keep-alive connections nor frequent pings.
Is it possible to send data, like JSON strings, through RTMP?
It would be theoretically possible to implement RTMP's AMF3 and AMF0 message types to carry the data. See RTMP [Wikipedia].
The problem is that using a protocol typically used for streaming video might get your connection blocked or throttled by some service providers that limit such protocols to conserve bandwidth (and prevent employees from watching internet videos at work).
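To give a flavor of what that involves: AMF0 encodes a string value as a one-byte type marker, a big-endian length, and the UTF-8 bytes, so a JSON payload could be wrapped like this (a sketch only; the RTMP chunk-stream and message framing around it is a much bigger job):

```typescript
// Sketch: encode a JSON string as an AMF0 string value, i.e. the kind
// of payload an RTMP data message could carry. Only the value encoding
// is shown; RTMP chunk/message framing is omitted entirely.
function encodeAmf0String(json: string): Uint8Array {
  const utf8 = new TextEncoder().encode(json);
  if (utf8.length <= 0xffff) {
    const out = new Uint8Array(3 + utf8.length);
    out[0] = 0x02;                      // AMF0 "string" type marker
    out[1] = (utf8.length >> 8) & 0xff; // u16 big-endian byte length
    out[2] = utf8.length & 0xff;
    out.set(utf8, 3);
    return out;
  }
  // Strings over 65535 bytes use the "long string" marker + u32 length.
  const out = new Uint8Array(5 + utf8.length);
  out[0] = 0x0c;
  new DataView(out.buffer).setUint32(1, utf8.length); // big-endian
  out.set(utf8, 5);
  return out;
}

// e.g. encodeAmf0String(JSON.stringify({ type: "chat", text: "hi" }))
```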
This article may be of some use to you. It explains how to set up an RTMP server with nginx.
From the article:
nginx is an extremely lightweight web server, but someone wrote a RTMP module for it, so it can host RTMP streams too. However, to add the RTMP module, we have to compile nginx from source rather than use the apt package. Don't worry, it's really easy. Just follow these instructions. :)
One comment on this article, by a user named 'stefaniuk', linked to a GitHub repository that I think you should look into. Check it out here.
I have a problem where I need to compress (zip) multiple files in the web app (on the server) before streaming the result down to the browser. I'm pulling the files from a separate service that connects to a SQL database, so there's a long delay in fetching the files from the service, plus another delay in compressing them before the zipped package can be streamed to the browser. Ideally, I would like the DOWNLOAD button on the page to call a SignalR method on the server, which will then push a notification back to the client once the files are done compressing. That way, the browser won't ask the server to stream the zipped file right away; it will only begin streaming once compression has finished.
Background info: I'm using IIS 7.5 and MVC 4.
I've been reading up and watching videos on SignalR, but I have only seen examples of chat hubs, pushing to multiple clients, etc. Would it be possible to use SignalR for only the client that makes the request? If so, I would appreciate some example code or a link to a tutorial on how to accomplish something like this. Thanks!
To achieve what you need, you will have to define three clients:
The Browser: calls The Hub when a download is requested, then waits for a call from The Hub to download the files.
The Server: receives a notification from The Hub when the browser requests a download and, when everything is ready, calls The Hub to pass the files along.
The Service: receives the files from The Hub when they are passed along by The Server, makes the files ready for download, then sends a notification to The Hub to inform The Browser.
Note
Storing large files in memory is not recommended, and neither is passing them through SignalR, unless that is the only way the server and the service can share the files. If you have common storage available (disk or database), it's better to use that.
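As for notifying only the requesting client: yes, inside the hub method you can push to Clients.Caller rather than Clients.All. On the browser side, using the generated jQuery proxies from SignalR 2.x, it would look roughly like this (the names zipHub, startZip, and zipReady are placeholders for this sketch):

```typescript
// Browser side, SignalR 2.x generated jQuery proxies.
// "zipHub", "startZip", and "zipReady" are placeholder names.
declare const $: any; // jquery + jquery.signalR + /signalr/hubs loaded

const zipHub = $.connection.zipHub;

// Register the callback BEFORE starting the connection. The hub
// invokes this on Clients.Caller once compression has finished.
zipHub.client.zipReady = (downloadUrl: string) => {
  window.location.href = downloadUrl; // begin the real download now
};

$.connection.hub.start().done(() => {
  $("#download").on("click", () => {
    zipHub.server.startZip(); // start compressing; the push comes later
  });
});
```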
What I'm trying to do is implement a web server using Netty that stores large file uploads in HDFS as HDFS files.
My basic workflow is as follows:
An end user sends an HTTP PUT/POST request (with a payload of, say, 1 GB) to my Netty server
The server accepts the HTTP connection and parses the method/URI/headers
The server makes a call through DFSClient to HDFS to create a file and obtain a handle (DFSClient.create is a blocking call)
The server receives the rest of the upstream HTTP request and writes it to the HDFS handle chunk by chunk (writing each chunk to the HDFS handle is a blocking call)
The server closes the HDFS handle and acknowledges back to the client (closing the HDFS handle is a blocking call)
I'm having trouble making the above steps work, because I don't know the best way to make blocking calls efficiently in Netty (blocking the event loop as little as possible).
Can anybody show me a correct way of implementing the above logic? Many thanks in advance!
If you need to block, put an ExecutionHandler in front of the handler that performs the blocking operation, so that you don't stall the other channels served by the same I/O thread.
I am trying to simulate a slow HTTP read attack against an Apache server running on my localhost.
But it seems the server does not complain and simply waits forever for the client to read.
This is what I do:
Request a huge file (say ~1 MB) from the HTTP server
Read the response from the server in a loop, waiting 100 seconds between successive reads
Since the file is huge and the client's receive buffer is small, the server has to send the file in multiple chunks. But on the client side, I wait 100 seconds between successive reads. As a result, the server repeatedly polls the client and finds that the client's receive window is zero, because the client has not yet drained its receive buffer.
But it looks like the server does not bother to break the connection: it silently keeps polling the client, sends data whenever the client's window size is > 0, and then goes back to waiting.
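The client side amounts to something like this (a Node sketch; the host, port, path, and 100-second interval mirror what I described):

```typescript
// Node sketch of the slow-read client: request a large file, then
// stall between reads so the kernel receive buffer fills up and the
// advertised TCP window drops to zero.
import * as net from "net";

const READ_INTERVAL_MS = 100_000; // ~100 s between successive reads

const socket = net.connect({ host: "localhost", port: 80 }, () => {
  socket.write("GET /bigfile HTTP/1.1\r\nHost: localhost\r\n\r\n");
});

socket.on("data", (chunk) => {
  console.log(`read ${chunk.length} bytes`);
  socket.pause(); // stop consuming; TCP flow control does the rest
  setTimeout(() => socket.resume(), READ_INTERVAL_MS);
});
```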
I want to know whether there are any Apache config parameters I can set to break the connection from the server side after waiting some time for the client to read the data.
Perhaps this would be more useful to you (it's simpler and saves you time): http://ha.ckers.org/slowloris/ is a Perl script that sends partial HTTP requests. The Apache server leaves each such connection open (making it unavailable to new users), and if the script is executed in a Linux environment (Linux does not limit threads beyond hardware capability), you can effectively tie up all the open sockets and prevent other users from accessing the server. It uses minimal bandwidth because it does not "flood" the server with requests; it simply takes the sockets hostage, slowly. You can download the script here: http://ha.ckers.org/slowloris/slowloris.pl
To prevent (well, mitigate) an attack like this, see here: https://serverfault.com/questions/32361/how-to-best-defend-against-a-slowloris-dos-attack-against-an-apache-web-server
You could also use a load-balancer or round-robin setup.
Try slowhttptest to test the slow read attack you're describing. (It can also be used to test slow sending of headers.)