I have been storing the template locally as an HTML file and sending it as part of the body. What are the benefits of storing the templates in AWS SES? Is it that the load on my server is reduced because I don't have to send the HTML over on every call?
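To make the comparison concrete, my understanding is that with a stored template each send would only carry the template name plus a small JSON blob of variables, rather than the full HTML. A rough sketch with the Ruby SDK (used purely as an illustration; the template name, addresses, and region are placeholders):

require "aws-sdk-ses"
require "json"

ses = Aws::SES::Client.new(region: "us-east-1")

# The HTML lives in the stored "WelcomeTemplate" in SES; this call only
# sends the template name and the per-recipient variables.
ses.send_templated_email(
  source: "noreply@example.com",
  destination: { to_addresses: ["user@example.com"] },
  template: "WelcomeTemplate",
  template_data: { name: "Alice" }.to_json
)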
Users will upload videos from the browser, and these need to be stored on the server and played back. My first pass at googling suggests I need to go with HTTP Live Streaming (HLS) here.
But I am not sure how it works internally.
There are three components in the above workflow, i.e. client, server, and data store, for saving and retrieving videos.
Save flow:
I believe I need to plug in an HLS client for sending the streaming data.
Does the client itself divide the file into chunks while sending, and maintain the chaining of these chunks where each chunk points to the next one? Something like that, as I believe the server is dumb here and works in the same fashion as regular HTTP upload functionality, with no other intelligence required?
But I am not sure how the HLS server-side component works here, i.e. will it save the upload as a single file, or is the single file split into multiple files which are then saved on disk?
I believe it stores the file as a single file, like a regular HTTP file upload?
Retrieval part
In a normal HTTP file download, the client asks for the file data and the server sends the response back in chunks, but all response chunks are sent back against the same request.
I believe that in the case of HLS it is pull based, where the client initiates a pull request for each piece of the stream. In each chunk pull request the client gets the file name of the next chunk and then sends the server a request for that chunk, and so on for each poll request? So for the server it is just a regular HTTP file download request, and all the intelligence lies with the client?
Save flow: When you upload a video, it must be converted into HLS format. You can use FFMPEG to do that. You'll end up creating manifest files, and all the segments of the video.
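For example, a basic FFMPEG invocation for segmenting a video into HLS looks roughly like this (assuming the source is already H.264/AAC; otherwise you would transcode rather than copy the streams):

ffmpeg -i input.mp4 -c copy -hls_time 10 -hls_list_size 0 -f hls output/index.m3u8

This writes the playlist (index.m3u8) plus a series of .ts segment files alongside it.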
Retrieval part:
The player will read the manifest file to know which segments to request. I've written a post on how HLS playback works with the manifest files: https://api.video/blog/video-trends/what-is-hls-video-streaming-and-how-does-it-work
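To make that concrete, a media playlist is just a plain-text file listing segment durations and file names, roughly like this (illustrative names):

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10.0,
segment0.ts
#EXTINF:10.0,
segment1.ts
#EXTINF:10.0,
segment2.ts
#EXT-X-ENDLIST

The player fetches this playlist first and then requests each listed segment with an ordinary HTTP GET, which is why the server side can remain a plain file server.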
We are developing a web application. Inside that application, to download a file, I have created a WCF REST service that downloads the files, based on this link: Download using WCF Rest. The purpose is to check for user authentication before downloading. I used the streaming concept to download the file. I have now found out a few things.
When the user downloads the file, he is not able to see the file size or the time remaining. I analyzed this and found out that the reason is that the response uses "Transfer-Encoding: chunked" in the header, so the file is downloaded in chunks. One of the advantages is that memory consumption on the server is low even when many users are downloading a file. So I thought of adding the "Content-Length" header, but I found out that you can use only one of these two headers, not both. That made me wonder how Hotmail and Gmail download attachments. From my investigation, I found out that Hotmail uses the chunked header whereas Gmail uses the Content-Length header. In the case of Gmail, it also checks whether the session is active and then downloads the file accordingly. I want to achieve the following:
a) Like Gmail, I want to check whether the session is active and then download the file accordingly. What is the method for me to implement this?
b) When downloading the file, I want to use the Content-Length header instead of the chunked header, while keeping memory consumption on the server low. Can we achieve this in WCF REST? If so, how?
c) Is it possible for me to add a header in WCF that will display the file size in the browser's Downloads window?
d) When downloading inline images from WCF, I found out that the image is not cached on the local machine after it loads. I was expecting that once an image is shown in an HTML page, it would be cached automatically, and the next time the user visits the page the image would load from the cache instead of from the server. I want the inline images to be cached; what option can I use for this? Are there any headers that I need to specify when serving an inline image from the server?
e) When I download a zip file using WCF in the iPhone Chrome browser, it does not download at all, but the same link works in the Android Chrome browser. What could be the problem? Am I missing a header in WCF?
Are there any methods that will achieve the above?
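For reference, the difference I am describing in (b) and (c) is roughly between these two response shapes (illustrative headers only):

HTTP/1.1 200 OK
Transfer-Encoding: chunked
Content-Disposition: attachment; filename="report.zip"

HTTP/1.1 200 OK
Content-Length: 10485760
Content-Disposition: attachment; filename="report.zip"

With the first form the browser cannot show the total size or time remaining; with the second it can, because the total size is declared up front.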
Regards,
Jollyguy
I am using PhantomJS 1.9.7 to scrape a web page. I need to send the returned page content to S3. I am currently using the filesystem module included with PhantomJS to save to the local file system and using a php script to scan the directory and ship the files off to S3. I would like to completely bypass the local filesystem and send the files directly from PhantomJS to S3. I could not find a direct way to do this within PhantomJS.
I toyed with the idea of using the child_process module and passing in the content as an argument, like so:
var execFile = require("child_process").execFile;
var page = require('webpage').create();

page.open('http://example.com/', function (status) {
    var content = page.content;
    // Arguments must be passed as an array, not as a single string.
    execFile('php', ['path/to/script.php', content], null, function (err, stdout, stderr) {
        console.log("execFileSTDOUT:", JSON.stringify(stdout));
        console.log("execFileSTDERR:", JSON.stringify(stderr));
        phantom.exit();
    });
});
which would call a PHP script directly to accomplish the upload. This requires using an additional process to call a CLI command, and I am not comfortable with having another asynchronous process running. What I am looking for is a way to send the content directly to S3 from the PhantomJS script, similar to what the filesystem module does with the local filesystem.
Any ideas as to how to accomplish this would be appreciated. Thanks!
You could just create and open another page and point it to your S3 service. Amazon S3 has a REST API and a SOAP API, and REST seems easier.
For SOAP you would have to build the request manually. The only problem might be the wrong content-type. It looks as if it has been implemented, but I cannot find a reference in the documentation.
You could also create a form in the page context and send the file that way.
I am attempting to use an S3 bucket as a deployment location for an internal, auto-updating application's files. It would be the location where the new version's files are dumped for the application to pick up on an update. Since this is an internal application, I was hoping to keep the bucket private but still be able to access it using only a URL. I was hoping to look into using third-party auto-updating software, which means I can't use the Amazon API to access it.
Does anyone know a way to get a URL to a private bucket on S3?
You probably want to use one of the available AWS Software Development Kits (SDKs), which all implement the respective methods to generate these URLs (e.g. Java: generatePresignedUrl(), C#: GetPreSignedURL()):
The GetPreSignedURL operation creates a signed http request. Query string authentication is useful for giving HTTP or browser access to resources that would normally require authentication. When using query string authentication, you create a query, specify an expiration time for the query, sign it with your signature, place the data in an HTTP request, and distribute the request to a user or embed the request in a web page. A PreSigned URL can be generated for GET, PUT and HEAD operations on your bucket, keys, and versions.
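For illustration, with the Ruby SDK (v3) generating such a URL looks roughly like this (bucket name, key, and expiry are placeholders):

require "aws-sdk-s3"

# Builds a time-limited GET URL for an otherwise private object.
signer = Aws::S3::Presigner.new
url = signer.presigned_url(
  :get_object,
  bucket: "my-internal-bucket",
  key: "releases/latest.zip",
  expires_in: 3600  # seconds until the link stops working
)
puts url

Anyone who has that URL can fetch the object until it expires, which fits the "access it using only a URL" requirement.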
There are a couple of related questions already, and e.g. Why is my S3 pre-signed request invalid when I set a response header override that contains a "+"? contains a working sample in C# (aside from the content-type issue Ragesh is experiencing, of course).
Good luck!
I'm thinking about whether to host uploaded media files (video and audio) on S3 instead of locally. I need to check the user's permissions on each download.
So there would be an action like get_file, which first checks the user's permissions and then gets the file from S3 and sends it using send_file to the user.
def get_file
  if @user.can_download(params[:file_id])
    # first, download the file from S3 and then send it to the user using send_file
  end
end
But in this case, the server (unnecessarily) downloads the file first from S3 and then sends it to the user. I thought the use case for S3 was to bypass the Rails/HTTP server stack for reduced load.
Am I thinking this wrong?
PS. I'm using CarrierWave for file uploads. Not sure if that's relevant.
Amazon S3 provides something called RESTful authenticated reads, which are basically timeoutable URLs to otherwise protected content.
CarrierWave provides support for this. Simply set the S3 access policy to authenticated read:
config.s3_access_policy = :authenticated_read
and then model.file.url will automatically generate the RESTful URL.
Typically you'd embed the S3 URL in your page, so that the client's browser fetches the file directly from Amazon. Note however that this exposes the raw unprotected URL. You could name the file with a long hash instead of something predictable, so it's at least not guessable -- but once that URL is exposed, it's essentially open to the Internet. So if you absolutely always need access control on the files, then you'll need to proxy it like you're currently doing. In that case, you may decide it's just better to store the file locally.
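If you want to keep the per-download permission check but still avoid proxying the bytes through Rails, one sketch of the check-then-redirect flow (reusing the can_download check from the question; Upload is a placeholder model name) would be:

def get_file
  if @user.can_download(params[:file_id])
    upload = Upload.find(params[:file_id])
    # The browser follows the redirect and fetches the bytes directly from S3
    # via the expiring authenticated URL, so they never pass through Rails.
    redirect_to upload.file.url
  else
    head :forbidden
  end
end

The caveat above still applies: anyone holding the generated URL can use it until it expires.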