Does Base64 encoding speed up the upload time for an image uploaded via mobile to server? - file-upload

As per the title, I'm trying to upload an image from a mobile device to the server, and it takes time even though it's compressed. I have applied Base64 encoding, but I don't see a significant change in the upload time. Can someone suggest a better approach that consumes less bandwidth? Thank you.

Related

Real Time screen grabbing and streaming with libav-tools

For my school project I have to stream a screen grab from one station (i.e. the server) to another (i.e. the client) in real time, both running Linux (Ubuntu).
I'm using libav-tools (avconv as the encoder on the server side and avplay as the player on the client side).
avconv uses the x11grab format to grab from the screen.
My problem is: avconv needs a few seconds to output the encoded video. This wait is too long for real time.
I've tried streaming to localhost to avoid any network influence on speed; it still seems that avconv is responsible for the long wait.
Also, streaming a video file seems to be much faster, starting almost immediately.
The project is implemented in C++ and executes avconv in a fork.
Any suggestions for shortening the delay?
This is most likely due to internal buffering. There is often a buffer that is far too big by default. That is because having no delay is not the primary concern of most software; it is more concerned with bad connections and that sort of problem, which is what buffers are for.
See https://libav.org/avconv.html and search for "nobuffer", "-analyzeduration", "-rtbufsize", "-max_delay", "-fpsprobesize", "rtmp_buffer" (if you use RTMP), or others, and try your luck.
There will always be a noticeable delay, especially if you use an encoding like H.264 for transfer. But it does not need to be a few seconds in a controlled environment; you should be able to bring it down to a fraction of a second.
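Since the advice above is to trim input probing and encoder buffering, here is a rough sketch of what a low-latency-oriented avconv invocation might look like. It is launched from Python here only for brevity; in the question's C++ project the same argument list would go to the fork/exec call. The display, resolution, destination URL, and flag values are assumptions to experiment with, and -tune zerolatency assumes an avconv built with libx264.

```python
# Minimal sketch (assumptions: X display :0.0, 1024x768 capture, MPEG-TS over UDP
# to localhost:1234, avconv built with libx264). Flag values are starting points
# to tune, not a guaranteed fix.
import subprocess

cmd = [
    "avconv",
    "-f", "x11grab",            # grab from the X11 screen
    "-s", "1024x768",
    "-r", "25",
    "-i", ":0.0",
    "-analyzeduration", "0",    # spend as little time as possible probing the input
    "-probesize", "32",
    "-c:v", "libx264",
    "-preset", "ultrafast",     # favour encoding speed over compression
    "-tune", "zerolatency",     # disable lookahead/B-frame buffering in x264
    "-f", "mpegts",
    "udp://127.0.0.1:1234",
]

# In the C++ project this would be the argv passed to exec*() in the forked child.
subprocess.run(cmd, check=True)
```

On the playback side, avplay's input options (for example -fflags nobuffer and -analyzeduration) are worth experimenting with as well, since the player's own probing and buffering add to the end-to-end delay.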

Is boto slow for large files?

I'm doing a few performance tests for uploading large files, on the order of 100 MB+. I've read posts about breaking things up and uploading the pieces in parallel, but I'm just trying to figure out how fast a single large file can go.
When I do my upload and watch the performance with collectl, second by second, I never get over 5 MB/sec. On the other hand, if I reduce the file size to just 50 MB, I can do uploads at 20 MB/sec.
Is there some magic going on that's based on file size? Is there a way to make my single 100 MB file upload faster? What would happen if it were 500 MB or even 5 GB?
Hmm, I tried it a number of times and consistently got 5 MB/sec, and now when I tried it again I got over 15. Is this because I'm sharing bandwidth?
-mark
There is definitely no magic going on in boto that would account for the variability you are observing. There are so many variables in this equation, e.g. your own connection to the internet, your provider's connection to the backbone, overall network traffic, the load on S3, etc., that it is extremely difficult to give a definitive answer.
In general, I have found that I can achieve the best performance by using multipart upload and some sort of concurrency. The s3put command line utility in boto provides an example of one way to do this. Also, if your S3 bucket is located in a specific region you might see better performance if you connect to that particular endpoint rather than the generic S3 endpoint.
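For reference, here is a minimal sketch of the multipart approach with boto 2.x. It assumes an existing bucket (my-bucket is a placeholder), the small third-party filechunkio package for slicing the file into parts, and a region-specific connection as suggested above; the chunk size and names are arbitrary.

```python
# Minimal sketch of a boto (boto 2.x) multipart upload. Bucket name, key name,
# region, and the 25 MB chunk size are placeholders, not recommendations.
import math
import os

import boto.s3
from filechunkio import FileChunkIO   # small helper package, installed separately

def multipart_upload(filename, bucket_name, key_name, region='us-west-2',
                     chunk_size=25 * 1024 * 1024):
    # Connecting to the bucket's own region endpoint can help throughput/latency.
    conn = boto.s3.connect_to_region(region)
    bucket = conn.get_bucket(bucket_name)

    file_size = os.path.getsize(filename)
    chunk_count = int(math.ceil(file_size / float(chunk_size)))

    mp = bucket.initiate_multipart_upload(key_name)
    try:
        for i in range(chunk_count):
            offset = i * chunk_size
            size = min(chunk_size, file_size - offset)
            # Each part could also be pushed from its own process or thread
            # (each with its own connection) for more concurrency.
            with FileChunkIO(filename, 'r', offset=offset, bytes=size) as fp:
                mp.upload_part_from_file(fp, part_num=i + 1)
        mp.complete_upload()
    except Exception:
        mp.cancel_upload()
        raise

if __name__ == '__main__':
    multipart_upload('bigfile.bin', 'my-bucket', 'uploads/bigfile.bin')
```

The sketch uploads the parts sequentially to stay short; pushing several parts at once from separate workers is where the real throughput gains usually come from.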

Using AWS S3 for photo storage

I'm going to be using S3 to store user-uploaded photos. Obviously, I won't be serving the image files to user agents without resizing them down. However, no single size will do, as some thumbnails will be smaller than the larger previews. So I was thinking of defining a standard set of dimensions, from 16x16 at the low end up to 1024x1024. Is this a good way to solve this problem? What if I need a new size later on? How would you solve this?
Pre-generating different sizes and storing them in S3 is a fine approach, especially if you know what sizes you need, are likely to use all of the sizes for all of the images, and don't have so many images and sizes that the storage cost is excessive.
Here's another approach I use when I don't want to pre-generate and store all the different sizes for every image, or when I don't know what sizes I will want to use in the future:
Store the original size in S3.
Run a web server that can generate any desired size from the original image on request.
Stick a CDN (CloudFront) in front of the web server.
Now, your web site or application can request a URL like /16x16/someimage.jpg from CloudFront. The first time this happens, CloudFront will get the resized image from your web server, but then CloudFront will cache the image and serve it for you, greatly reducing the amount of traffic that hits your web server.
Here's a service that resizes images from arbitrary URLs, serving them through CloudFront: http://filter.to
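To make the on-demand approach above concrete, here is a minimal sketch of a resizing web server that CloudFront could sit in front of. It assumes boto 2.x, Flask, and Pillow, an originals bucket named my-photo-originals (a placeholder), and a whitelist of allowed sizes.

```python
# Minimal sketch of the "resize on request, cache in CloudFront" idea. Bucket name,
# region, and the allowed sizes are placeholders.
import io

import boto.s3
from flask import Flask, abort, send_file
from PIL import Image

app = Flask(__name__)
ALLOWED_SIZES = {(16, 16), (64, 64), (256, 256), (1024, 1024)}  # guard against abuse

def load_original(key_name):
    conn = boto.s3.connect_to_region('us-west-2')
    bucket = conn.get_bucket('my-photo-originals')
    key = bucket.get_key(key_name)
    return key.get_contents_as_string() if key else None

@app.route('/<int:w>x<int:h>/<path:key_name>')
def thumbnail(w, h, key_name):
    if (w, h) not in ALLOWED_SIZES:
        abort(404)
    data = load_original(key_name)
    if data is None:
        abort(404)
    img = Image.open(io.BytesIO(data))
    img.thumbnail((w, h))                      # preserves aspect ratio
    out = io.BytesIO()
    img.save(out, format='JPEG')
    out.seek(0)
    # CloudFront caches this response, so each size is generated at most once
    # per cache lifetime.
    return send_file(out, mimetype='image/jpeg')

if __name__ == '__main__':
    app.run()
```

Restricting the URL to a known set of sizes keeps a caller from forcing the server to generate arbitrarily many variants of every image.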
This sounds like a good approach. Depending on your application, you should define a set of thumbnail sizes that you always generate. But also store the original user file, in case your requirements change later. When you want to add a new thumbnail size, you can iterate over all the original files and generate the new thumbnails from them. This gives you flexibility later.

Photo resize. Client-side or server-side?

I'm creating a photo-gallery site. I want each photo to have 3 or 4 instances with different sizes (including the original photo).
Is it better to resize a photo on the client side (using Flash or HTML5) and upload all the instances of the photo to the server separately? Or is it better to upload the photo to the server only once, but resize it using server resources (for example, GD)?
What would be your suggestions?
It would also be interesting to know how big sites do this. For example, 500px.com (that site creates 4 instances of each photo, and everything works fast enough) or Facebook.
There are several schools of thought on this topic; it really comes down to how many images you have and how likely it is that the images will be viewed more than once. It is most common for all of the image sizes to be created using a tool like Adobe Photoshop, GIMP, Sizzlepig or GD (locally or on a server, not necessarily the web server) and then to upload all the assets to the server.
Resizing before you host the image takes some of the strain off of the end user's web browser and, more importantly, reduces the amount of bandwidth required to host the site (especially useful when you are running a large site and paying per GB transferred).
To answer your part about really big sites, some do image scaling ahead of time, others do it on the fly, but typically it's done server side.
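If you go the pre-generation route, the server-side step can be as small as the following sketch (assuming Pillow; the size list and naming scheme are placeholders), run once when a photo is uploaded and run again over all the originals whenever a new size is added.

```python
# Minimal sketch of server-side pre-generation at upload time; sizes and the file
# naming scheme are placeholders.
import os
from PIL import Image

SIZES = [(1024, 1024), (512, 512), (128, 128)]  # plus the untouched original

def generate_variants(original_path, out_dir):
    os.makedirs(out_dir, exist_ok=True)
    base, ext = os.path.splitext(os.path.basename(original_path))
    for w, h in SIZES:
        img = Image.open(original_path)
        img.thumbnail((w, h))                   # keeps aspect ratio, never upscales
        img.save(os.path.join(out_dir, '%s_%dx%d%s' % (base, w, h, ext)))

generate_variants('upload/photo.jpg', 'public/photos')
```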

Best way to manage probably huge photo library with iPhone SDK

I'm developing an app with a list of products. I want to let the user have one picture for each product.
Now, the problem is what to do next. I think the best way is for the photos to get synced when the user connects to their computer and iTunes, and to access them from the app (something like /photos/catalog/ref1.jpg).
The other option is to put them in my SQLite database, but I worry that it will get too big. I have data + pictures; the data changes a lot, but the pictures are rarely modified (at most, I expect the user to take 2-3 new pictures each time).
I would just use the network connection available on the device, and not bother with sync through iTunes.
When you download the images, write them to the app's Documents folder, then load them from there. Network usage vs. disk space will be a concern. Keep in mind some carrier networks can be crazy expensive for data transfer.
If the images are named with a systematic format, then you can maintain them by comparing the image identifiers against your data, pruning out the older or irrelevant ones.
Do the math and ballpark just how much disk space you think it would take for a local copy of all the images.