I have configured my Drupal site so that all images, files, media, etc. are handled by S3 using the S3 File System module.
Everything works fine and the image/file field uploader works fine, but there is a huge performance issue when using the IMCE file browser from the WYSIWYG editor. It takes at least a minute for the browser to display its contents, even though there are only 290 images totalling 78 MB in that initial folder, which should not cause such huge delays. This has a big impact on our editors, who lose several minutes just to upload a couple of images.
I tried various pagination patches and there is no difference in performance at all.
What are my options now?
After drilling through many forums and discussions, it turns out that IMCE was not meant for the S3 file system, and I found this patch in PDF form (warning: it downloads rather than opens).
I followed the steps in that patch, which significantly improved performance.
I have a URL on my website that serves a PDF loaded from Google Drive via the https://docs.google.com/document/d/[document id]/export?format=pdf URL, using a simple PHP readfile() call. However, the generated PDF does not have metadata such as Title, Author, and Description.
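For context, the serving script is essentially a pass-through along these lines (the document ID and file name below are placeholders, not the real values):

```php
<?php
// Stream the Google Drive PDF export straight to the visitor.
// Requires allow_url_fopen; the document ID is a placeholder.
$docId = 'DOCUMENT_ID';
header('Content-Type: application/pdf');
header('Content-Disposition: inline; filename="document.pdf"');
readfile("https://docs.google.com/document/d/$docId/export?format=pdf");
```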
What is the best way to serve the PDF with updated metadata?
Caveats:
The website is on shared cPanel web hosting.
I cannot install Perl modules.
The host doesn't provide native PHP PDF support, and I don't have access to Composer.
The only supported server-side languages are Perl and PHP.
The only acceptable solution I found was ConvertApi, which has a very limited free account (1500 seconds). However, I can get around that by caching the PDF and retrieving a new copy either when it has been over a day since it was last updated or when I pass an argument to force it to re-cache.
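That caching approach would look roughly like the sketch below; the cache path, the one-day TTL, and the refresh parameter name are my own assumptions, and the fetch step is where a ConvertApi call could slot in:

```php
<?php
// Cache-then-serve sketch (paths and parameter names are assumptions).
$docId     = 'DOCUMENT_ID';   // placeholder for the real document ID
$source    = "https://docs.google.com/document/d/$docId/export?format=pdf";
$cacheFile = __DIR__ . '/cache/document.pdf';
$maxAge    = 86400;                    // one day, in seconds
$force     = isset($_GET['refresh']);  // e.g. ?refresh=1 forces a re-fetch

if ($force || !file_exists($cacheFile) || (time() - filemtime($cacheFile)) > $maxAge) {
    // Fetch a fresh copy; this is also where the metadata rewrite would happen.
    $pdf = file_get_contents($source);
    if ($pdf !== false) {
        file_put_contents($cacheFile, $pdf);
    }
}

header('Content-Type: application/pdf');
header('Content-Length: ' . filesize($cacheFile));
readfile($cacheFile);
```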
Do you recommend any other solutions? I would much rather have a set-it-and-forget-it solution.
Or is 1500 seconds enough for a file that is rarely going to be used?
I'm hosting an HLS stream with XAMPP / Apache, which basically means I have a folder in my document root that contains a couple of incrementally numbered 10-second video files.
Every 10 seconds, a new video file is saved into the folder and the oldest video file in the folder is deleted.
Apart from these video files, the document root also contains some other files, such as PHP scripts and playlist files.
My server has plenty of RAM and a pretty fast CPU, but is using a comparatively slow hard disk.
Given the fact that the constant downloading of these video files is likely what's going to make or break the server performance, it seems like a good idea to cache these files in memory.
If Apache were to keep all video files (with a .ts extension) that are downloaded by a user's video player in its memory for about 60 seconds, the next user would then be able to download the file much faster. Apache could rely on the files not changing after the first open, and on the fact that the files won't be requested anymore after those 60 seconds.
All other files do not (necessarily) have to be cached, since they're rather small and are regularly modified.
Is anyone able to give me directions on how to get started?
Modern operating systems already cache accessed files in memory. The whole process is managed by the kernel automatically.
Apache's in-memory caching won't help you here, since it needs the full list of files at start-up.
If you want some level of control over the caching, you could use vmtouch. Check its manual.
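As a rough illustration of how vmtouch could be used here (the XAMPP document-root path is just an example):

```
# Show how much of the HLS directory is currently resident in the page cache
vmtouch -v /opt/lampp/htdocs/hls/

# Touch the current segments so they are pulled into the page cache
vmtouch -t /opt/lampp/htdocs/hls/*.ts

# Or run as a daemon that locks the directory's contents in memory
vmtouch -dl /opt/lampp/htdocs/hls/
```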
We have an MVC website deployed as a Cloud Service on Microsoft Azure. To boost performance, some of my colleagues suggested that we avoid the bundling and minification provided by ASP.NET MVC 4 and instead store the .js and .css files in an Azure blob. Please note that the solution does not use a CDN; it merely serves the files from a blob.
My take on this is that just serving the files this way will not bring any major performance benefit. Since we are not using a CDN, the files will always be served from the region in which our storage is deployed. Every time a user requests a page, at least for the first time, the data will flow across the data center boundary, which in turn incurs cost. Also, since the files are not bundled but kept individually, there will be more server requests, so we forfeit the benefits of bundling and minification. The only benefit I see in this approach is that we can change the .js and .css files and upload them without needing to re-deploy.
Can anyone please tell me which of the two options is preferable in terms of performance?
I can't see how this would be better than bundling and minification, unless the intent is to blob-store your minified and bundled files. The whole idea is to reduce requests to the server, because JavaScript processes on a single thread at a time and, in addition, each extra file adds download time, so I'd do everything I can to reduce that request count.
As a separate note on the image side, I'd also combine images into a single image and use CSS sprites, à la: http://vswebessentials.com/features/bundling
I don't have a server to distribute a Safari extension I made or to deploy updates from. Is there a free service I can use instead of putting it on a file-sharing website and posting it to Reddit?
I ended up using Amazon S3.
Just upload the .plist file and link everything up to each other. For low traffic you won't be charged anything; with a few hundred users it doesn't cost me more than a few cents every month. Keep in mind that your users' browsers will query your .plist file every time they launch, so the traffic may pile up that way.
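For reference, the update manifest is a plist roughly of the shape below; the bundle identifier, developer identifier, versions, and URL are placeholders, and the key names follow the legacy Safari extension update-manifest format, so verify them against Apple's documentation:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Extension Updates</key>
    <array>
        <dict>
            <key>CFBundleIdentifier</key>
            <string>com.example.myextension</string>
            <key>Developer Identifier</key>
            <string>XXXXXXXXXX</string>
            <key>CFBundleVersion</key>
            <string>2</string>
            <key>CFBundleShortVersionString</key>
            <string>1.1</string>
            <key>URL</key>
            <string>https://my-bucket.s3.amazonaws.com/MyExtension.safariextz</string>
        </dict>
    </array>
</dict>
</plist>
```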
I wrote a detailed tutorial here.
We have upgraded to ColdFusion 10 and I am testing large upload capability.
Using both an HTML form and the Flash multi-file upload (CFFILEUPLOAD), I can upload files of up to 2 GB.
With files over 2 GB the upload does not even start: it sits at 0% both with the Flash upload and in what the Chrome browser reports for the HTML form.
Technical services suggest it does not even get as far as Apache, so Apache is not restricting the upload. ColdFusion is also set up to allow 4000 MB of post data, even with the throttle.
The upload occurs across the network, so even testing with a 1.7 GB file doesn't take long, but a 2.5 GB file does not even begin.
Any suggestions to help diagnose the cause?
Thanks