Progressive JPG over HTTPS (SSL/TLS)

Progressive JPG is an image format that contains low-quality snapshots. These snapshots can be displayed while the higher-quality version is still being transmitted, which gives a better end-user experience. In other words, a progressive JPG can be displayed progressively in the web browser during its transmission.
Can the end user benefit from progressive JPG if the file is transferred over HTTPS? If the image is encrypted, wouldn't the snapshots have to be transmitted separately?

"Progressive JPG is an image format that contains low-quality snapshots"
No, that's not true¹. It is only one image, but saved in such a way that some "broad", lower-detail information is encoded first and more detailed data comes later. Rendering can therefore start early at lower quality and become more refined as more details arrive.
And serving such an image via HTTPS does not change that: TLS encrypts the byte stream in transit, but the browser decrypts the data incrementally as it arrives, so it can still render the partially received image.
¹ Images can contain additional thumbnail version(s) inside the EXIF metadata, but that is not what the "progressive" format is about. That feature is seldom used on the web, because it would increase the overall amount of data to be transferred (and I am not even sure whether common browsers support it, in the sense that they would display such a thumbnail first and then the full image once it has finished loading).
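To make the encoding difference concrete: a baseline JPEG begins its image data with an SOF0 marker (0xFFC0), while a progressive JPEG uses SOF2 (0xFFC2). A simplified sketch in Python (standard library only) that tells the two apart by walking the segment markers:

```python
def is_progressive_jpeg(data: bytes) -> bool:
    """Return True if the JPEG uses progressive DCT (an SOF2 marker).

    Simplified scanner: walks the header segments between the SOI marker
    and the first start-of-frame marker; real files may contain
    stand-alone markers this sketch does not handle.
    """
    i = 2  # skip SOI (0xFFD8)
    while i + 3 < len(data):
        if data[i] != 0xFF:
            return False  # a marker byte was expected; bail out
        marker = data[i + 1]
        if marker == 0xC2:           # SOF2: progressive DCT
            return True
        if marker in (0xC0, 0xC1):   # SOF0/SOF1: baseline / extended sequential
            return False
        # every other header segment carries a 2-byte big-endian length
        # that includes the length field itself
        segment_length = int.from_bytes(data[i + 2:i + 4], "big")
        i += 2 + segment_length
    return False
```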

Related

Resize image downloads for IPFS assets in CloudFlare

I am writing a Swift iOS app that uses Blockfrost.io to download assets from the Cardano blockchain. The asset's images come in the format ipfs://QmSJPFVN..., which can be retrieved by using the URI in a CloudFlare URL, like this https://cloudflare-ipfs.com/ipfs/QmSJPFVN....
My issue is that most of the images I'm trying to fetch and display are enormous, and it's seriously slowing down my UI. Are there parameters that can be added to the URL to specify a smaller image size to be fetched? I've looked around for a solution but haven't been able to find any.
You have two options for this:
1. Use a proxy to fetch the image server-side and convert it before downloading. You could use a Cloudflare Worker for this, for instance: https://developers.cloudflare.com/images/image-resizing/resize-with-workers
2. Download the full-size image, but convert it within your app before displaying it in the UI. You'll still use the full amount of bandwidth with this approach, but it may reduce complexity.
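For option 2, the conversion boils down to "decode, shrink, re-encode". Since the question targets Swift, the real code would use UIKit/Core Graphics; as a language-neutral illustration, here is the same logic sketched in Python with Pillow (the max_side and quality values are arbitrary):

```python
from io import BytesIO

from PIL import Image  # Pillow


def resized_jpeg(data: bytes, max_side: int = 512) -> bytes:
    """Decode image bytes, shrink to fit a max_side box, re-encode as JPEG."""
    img = Image.open(BytesIO(data)).convert("RGB")
    img.thumbnail((max_side, max_side))  # preserves aspect ratio; never upscales
    out = BytesIO()
    img.save(out, format="JPEG", quality=80)
    return out.getvalue()
```

The same idea applies on the server side in option 1: the proxy fetches the original once and returns only the shrunken bytes to the app.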

Fetching content binary from database or fetching content by its link from storage service

For an app (web + phone) there are two options:
Image binaries in database. Server replies to app HTTP request with images as base64
Images in storage service like Amazon S3 or Azure Blob Storage or a self-hosted one. Image links in database. Server handles app HTTP requests by sending back only the links to images. The app fetches the images from storage by their link
Which option above is the standard practice? Which one has less trouble down the road?
To some extent, the answer to this question is always opinion based, and partly depends on the specific use case.
I would think that the second approach is used more often. One reason is that storage within a database is usually slightly more expensive than file storage. Also, what is the real use case? Assuming you use HTML pages that reference images via the img element or via CSS as a background image, the base64 return value would not be that useful. On the other hand, the more complicated flow at the bottom of your picture becomes a bit simpler from the client's point of view: the server resolves the link when generating the HTML and uses it as the src of the img, and the browser then simply applies standard HTML logic and requests the image data from the storage service via HTTP.
However, if you want to optimize load times (and your images are more or less unique per page, so that browser caching of images across pages would not help much), you could use data URLs embedded into the HTML, and then the first approach could be useful. In this case, all the logic, including the generation of the data URL within the HTML, would be handled on the server, and the browser would need only a single HTTP request.
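As a small illustration of that first approach, embedding image bytes as a data URL takes only the standard library (the MIME type is whatever your image actually is):

```python
import base64


def image_to_data_url(data: bytes, mime: str = "image/png") -> str:
    """Encode raw image bytes as a data URL for direct embedding in HTML."""
    encoded = base64.b64encode(data).decode("ascii")
    return f"data:{mime};base64,{encoded}"


# usage inside a server-rendered page:
#   <img src="data:image/png;base64,iVBORw0KG...">
```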

Best streaming service for mp4 into webview

For the welcome screen of my app, we are trying to serve up a webpage in a webview that consists of a video and some text. (We want to go this route so that we could quickly update the welcome screen and test changes on the fly, versus having to submit and get approval each time.)
The video is only 8.6 MB and is currently played via HTML5, hosted on S3 and served via CloudFront. However, playback still tends to be a bit choppy at times. Does anyone have recommendations for an optimal way to host and serve the video so that it plays smoothly? Are there any specific S3 or CloudFront settings that could help?
Thanks in advance for any help anyone can provide.
The most common technique currently is to use ABR in parallel with a CDN to provide smooth playback.
ABR (Adaptive Bit Rate) involves making multiple copies of the video at different bit rates, from low to high, and hosting them on the server.
The client receives an index file for the videos, e.g. an m3u8 manifest file, and then chooses the best bit rate for the current conditions to allow smooth playback without buffering.
If network conditions improve the client will 'step up' bit rates and if it gets worse it will 'step down' bit rates.
Typically a low or medium bit rate is chosen as the first one to allow quick and smooth start up.
You can see this effect on services like Netflix as they start up, and you can also see it on YouTube if you right click the video and select 'Stats for Nerds'.
Some links for ABR with AWS Elastic Transcoder: you can set the bit rates you want. See, for example, the note below from their FAQ regarding HLS jobs:
Specify that the transcoding job create a playlist that references the outputs. You should order your bit rates from lowest to highest, with the audio only stream last, since this order will be maintained in the generated playlist file. Once your transcoding job has completed, the output bucket will contain a proper arrangement of your master and individual M3U8 playlists, and MPEG-2 TS media stream fragments.
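For illustration, a master playlist of the kind that note describes might look like this (the bandwidths, resolutions, and file names are made up; the audio-only rendition comes last, as the FAQ requires):

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
video_360p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
video_720p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
video_1080p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=128000,CODECS="mp4a.40.2"
audio_only.m3u8
```

The client reads this index, picks a rendition matching current conditions, and switches between the variant playlists as the network changes.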
Take a look at the sample request on the page below, which includes two different bit rates (video service providers will generally have more than two, but it gives you a feel for the approach):
http://docs.aws.amazon.com/elastictranscoder/latest/developerguide/create-job.html
Azure Media Services has a built-in "Adaptive Streaming" preset that is content-aware and adjusts the encoding settings to match the incoming content.
See the following - https://learn.microsoft.com/en-us/azure/media-services/media-services-autogen-bitrate-ladder-with-mes

Store Thumbnails in Redis

For a messenger app I store the latest messages in Redis.
They will be kept for 24 hours. Along with each message I have a thumbnail image.
Is it a good approach to store the thumbnail (2KB each) along with the message in Redis? It would make fetching the messages much faster since I get the message and the image in one transaction.
Or should the thumbnail be stored in S3 despite the fact that I need an additional PUT and GET request per message?
Edit:
The thumbnails are different per message. A message consists of text and a link to an image. While the full-resolution image is stored on S3, the message saved in Redis contains only a link to it.
The client is an iOS app. The app collects all messages from Redis. If the message contains an image, only the thumbnail should be shown before downloading the full resolution file.
The application design must allow thousands of requests / second.
See WhatsApp example:
Edit:
I calculated the AWS cost for both options.
Redis: storing the thumbnails would cost about 3k USD for 120 million messages.
S3: an additional PUT request per message would double the S3 costs, about 10k USD for 1 billion messages per month.
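To make the link-only option concrete, here is a minimal sketch (the key naming and JSON layout are made up) of the record the app would write to Redis, expiring after 24 hours via the key's TTL:

```python
import json

MESSAGE_TTL_SECONDS = 24 * 60 * 60  # messages are kept for 24 hours


def message_record(msg_id: str, text: str, image_url: str) -> tuple:
    """Build the (key, value, ttl) triple for one message.

    The value stores only a link to the full-resolution image on S3;
    with redis-py this would be written as r.set(key, value, ex=ttl),
    and Redis deletes the key automatically when the TTL elapses.
    """
    value = json.dumps({"text": text, "image": image_url})
    return f"msg:{msg_id}", value, MESSAGE_TTL_SECONDS
```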
Let's assume this is your requirement:
An iOS app instant messaging app;
There will be 1k/s messages;
If the message contains preview-able information, like video/img, a thumbnail should be displayed.
Some inferred conditions:
There might be 3k/s messages during peak;
There might be 3k/s preview-able messages during peak.
I assume the rest of your system is well designed and won't have a bottleneck. 1k messages/s means you need to do at least 1k writes per second to Redis, which is trivial for Redis. You are then asking whether you should also store the thumbnail of preview-able content in Redis, and my quick, personal answer is NO.
Client Aspect
The first question you should ask yourself is: does response time really matter for the client in this case? Would a missing preview be a big problem and cause major user-experience degradation? Are there ways to tolerate slower response times while maintaining a relatively good UX?
I believe users won't be too unhappy if they don't immediately see a preview of a video/image, compared to missing the video/image link entirely. I agree that a missing image preview may cause some UX degradation, but why would you display a broken placeholder? You can display the image as soon as the full thumbnail has been received.
Server Aspect
The first question you should ask is: does caching give any more benefit than uploading? And does caching introduce any problems?
Since you might not have good control over the thumbnail size, pushing it to Redis might take longer and consume more resources than you expect, and this may cause issues when writing text messages into Redis. Also, if you store the thumbnail in Redis, the client has to request it through your server, which means one more request and a large response.
Suggestions
Don't store it in Redis; just generate the thumbnail and upload it to S3. Trust Amazon; they are good most of the time.
But wait, are we done? Absolutely not. Why do we need to upload the image to our server first and then ask the server to generate and upload the thumbnail? Why can't we just do it on the client side?
Yes, that's another solution. Compress the picture, upload the thumbnail and the full-size image to S3, get links to them, and send the links to the server. The server then sends the links to the other client, which fetches the images from S3.
In this way, your server won't be flooded by huge images, even during peak.
Concerns
There are of course quite a lot of concerns: how do you handle upload failures? How do you handle malicious abuse? How do you handle duplicated images (like stickers)? How do you link an image to a chat room?
I will leave these questions to you, since some of them are business-logic related :)
Last Words
Do run load tests and benchmarks with a good simulation of your traffic and good logging, so that you know where the bottleneck is and can optimize wisely.
And always remember: get it running first, then get it right, and get it fast only if you have enough motivation and a strong reason. Premature optimization is the root of all evil, and a waste of time.

Pseudostreaming, byte range request, & mp4 fragmenting

I'm looking first for links to good documentation that correctly explains pseudo-streaming, byte-range requests, and MP4 fragmenting. Note that I will be using only the MP4 container (H.264 codec) and HTML5 video (no Flash).
My understanding of pseudo-streaming is that the client can send a start parameter that the server "seeks" to in its response. The MOOV box must be up front, and it implicitly means that buffering of the original source stops in favor of a new response starting at the start/seek position. How is the client made to issue these pseudo-streaming calls? Does the MP4 have to be formatted in a special way?
Byte-range requests are different: rather than just a start parameter, a byte range is sent. That sounds more like progressive downloading. How would seeking work? Does it work with byte ranges? Can the segment size be pre-determined from the movie-box information?
How does MP4 fragmentation fit in? It looks like a construct originally designed by Microsoft for Silverlight, but is it applicable to other browsers' HTML5 video implementations?
It is difficult to sort out the information on the web. I am looking both to serve live feeds and to take historical segments of H.264 files produced from RTP camera streams. I have a bunch of time-ordered files in MongoDB. I created my own H.264 decoder in JavaScript and can construct MPEG-DASH boxes on the fly from a range query, using Chrome's MSE support to append segments. That works great but is not a universal solution. I want to fall back on techniques other than Flash, still using HTML5 video.
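As an illustration of the byte-range mechanism the question asks about: the player seeks by sending a Range header, and the server answers 206 Partial Content with a Content-Range describing which bytes it returned. A trivial helper (the function name is mine) that builds such headers:

```python
def range_header(start, end=None):
    """Build the HTTP Range header used to seek into a file.

    An open-ended range ("bytes=1024-") asks for everything from the
    offset to the end of the file; the server replies 206 Partial
    Content with e.g. "Content-Range: bytes 1024-999999/1000000".
    """
    suffix = "" if end is None else str(end)
    return {"Range": f"bytes={start}-{suffix}"}


# usage with any HTTP client, e.g.:
#   GET /video.mp4 with headers range_header(1024)
```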