ImageResizer downloads an image multiple times?

I followed this article: http://imageresizing.net/blog/2013/effortless-responsive-images.
My images are stored on a CDN. After installing all the NuGet packages I got resizing to work, but the problem I ran into was that I had to add style="max-width:100%" to most of the images.
Also, I have a page where the same image appears in multiple spots, and I guess ImageResizer thinks each spot should contain a different size of the image, so it downloads 3 different versions, which sort of defeats the purpose. Is this how it is supposed to work?
As an example, I have imageA.png on a page and it might appear at the top, middle, and bottom. ImageResizer downloads a different version for each spot.
What is the best way to use ImageResizer with srcset? I can't seem to find anything on it.
If I use the DiskCache plugin, will this serve images to other users that request the same size or is it just for the current user requesting it?

I'll try to break apart your four-part question.
style="max-width:100%" to most of the images
CSS like img {max-width:100%} can do this globally. This rule is present by default in many themes/frameworks.
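For example, a minimal global rule (assuming you can edit the site stylesheet):

/* Let images shrink to fit their container instead of overflowing it */
img {
  max-width: 100%;
  height: auto; /* keep the aspect ratio when the width is constrained */
}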
If an image appears in multiple spots, and those spots require differently sized/cropped versions of the image, there will be multiple requests. This is how it is supposed to work.
ImageResizer responds to URLs like "image.jpg?width=100". Just use those URLs as you would when using srcset normally.
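A sketch, with placeholder image path and widths:

<img src="/images/photo.jpg?width=800"
     srcset="/images/photo.jpg?width=400 400w,
             /images/photo.jpg?width=800 800w,
             /images/photo.jpg?width=1600 1600w"
     sizes="100vw"
     alt="photo">

The browser picks the candidate that best fits the layout, and ImageResizer generates each size on demand from the width query parameter.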
DiskCache is not per-user. It is a global cache. It does re-apply authorization rules before serving from the cache.
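For reference, a minimal Web.config sketch with DiskCache enabled (the cache directory and attribute values here are placeholders; check the DiskCache docs for your setup):

<resizer>
  <!-- Cached renditions are written to disk and shared by all requests -->
  <diskCache dir="~/imagecache" autoClean="true" />
  <plugins>
    <add name="DiskCache" />
  </plugins>
</resizer>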

Related

TinyMCE 5 - large images pasted via Safari do not render correctly

We are running TinyMCE version 5.4.1 with various options including:
paste_data_images: true
powerpaste_allow_local_image: true
When we drag & drop (or paste) in smaller images (400px x 400px), everything seems to work fine. The Base64 encoding is saved to the database and the image is rendered in all browsers: Chrome, Firefox, and Safari.
However, when we paste in a larger image (1920px x 1081px), the image is only saved and rendered correctly in Chrome and Firefox. In Safari, the Base64 encoding is saved with all lowercase characters, so it doesn't render when we attempt to view it. Has anyone else experienced this?
I have searched here as well as on the TinyMCE website but don't see anything mentioning this behavior. We will eventually attempt to move away from this Base64 implementation as it's no longer recommended but it's what we have for the time being so I'm just trying to address this issue.
When the page loads, its elements can load in parallel. But when the browser sees the base64 image, it blocks the page from loading until the image is rendered. Thus, inserting large images into the page as base64 is certainly not good practice; it may slow down page loads and worsen the UX.
To fix this problem (and possibly several other issues), using the automatic_uploads option is highly recommended. It uploads pasted images to the server instead of converting them to base64. The TinyMCE docs include an example PHP upload handler that stores the images and returns their URLs to TinyMCE.
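A minimal init sketch (the upload URL is a placeholder; the endpoint should respond with JSON like {"location": "/images/name.png"}):

tinymce.init({
  selector: 'textarea',
  paste_data_images: true,
  // Upload pasted/dropped images instead of embedding them as base64:
  automatic_uploads: true,
  // Placeholder endpoint; implement it server-side (e.g. the docs' PHP handler)
  images_upload_url: '/upload-image.php'
});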
Concerning the issue with Safari, a minimal reproducible example would be very useful.
I should also mention that PowerPaste is a premium feature that will not work with the open-source edition of TinyMCE. If you are using the paid version of TinyMCE, you can create a support ticket.

Upload same image in different sizes - Dropzonejs

Is there any possibility to upload a file in different sizes with DropzoneJS?
I'm using vue-dropzone, which is built on Dropzone.js, and I have to upload the same file in different sizes for srcset.
Example:
I want to upload the file test.png, which is 1000x500 px. Is there any way to upload it at the same time in the original resolution and also at 500x250 px?
Image resizing in the browser has been a seat-of-the-pants experience for a long time. WebAssembly is the way of the future for processing-intensive tasks in web apps. I came across this project the other day. It looks fantastic and I really can't wait to strip out our home-baked canvas-based image resizing and replace it with this.
The usual reason for resizing client-side is to avoid large uploads, so it's a little bit weird to want to resize in the browser and then upload the original as well. You might be better off resizing on the server: you'll save bandwidth, and the server-side libraries will be more mature than what's available on the client.
Along with the original image object, you can add one more custom resized image to the array of images using Dropzone's resize handling. You can do this in Dropzone's drop or addedfile event, as in the sketch below.
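A minimal sketch of the addedfile approach (assumes an existing Dropzone instance named myDropzone; the canvas resize, the name prefix, and the resizedCopy flag are all illustrative):

myDropzone.on("addedfile", function (file) {
  if (file.resizedCopy) return; // don't re-process copies we created ourselves
  var img = new Image();
  img.onload = function () {
    var canvas = document.createElement("canvas");
    canvas.width = img.width / 2;   // e.g. 1000x500 -> 500x250
    canvas.height = img.height / 2;
    canvas.getContext("2d").drawImage(img, 0, 0, canvas.width, canvas.height);
    canvas.toBlob(function (blob) {
      blob.name = "small-" + file.name;
      blob.resizedCopy = true;
      myDropzone.addFile(blob);   // queued and uploaded alongside the original
    }, file.type);
  };
  img.src = URL.createObjectURL(file);
});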

Banner image doesn't load with Hippo CMS

I have been applying a custom HTML/CSS layout to Hippo. In homepage-main.ftl I have an image that is 1366x518 and ~400KB in size. Here's how it's implemented:
<img src="<#hst.webfile path="/images/homebanner.jpg"/>"/> However, when I run Hippo CMS it doesn't load (404 error in Chrome dev tool console) the banner image but it shows all the other images. I checked cms.war and i found this image inside cms->WEB-INF->lib->[projectname]-bootstrap-content-[snapshot version].jar. I put a small size image instead of homebanner.jpg and it worked. I am not sure whether this is an issue on Hippo CMS or Tomcat 8 configuration. any answer would be really appreciated.
Simple answer:
Web files are limited to 256 KB by default; anything bigger won't be picked up.
See also: http://www.onehippo.org/library/concepts/web-files/web-files-configuration.html if you want to change the max file size.
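If I recall the linked docs correctly, the limit lives on the web files module configuration in the repository; a sketch (property name per the docs, value in bytes, here 1 MB):

/hippo:configuration/hippo:modules/webfiles/hippo:moduleconfig
  maxFileLengthBytes = 1048576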
I would recommend making your banner configurable from within the CMS and using image sets for larger sizes.

SEO Considerations with CQ/AEM Image Component

It's come up recently at my job that the SEO guys for our customer are unhappy with the src attribute values being generated in our img tags on their CQ/AEM-based website. I know next-to-nothing about SEO, so I won't pretend to understand, but it seems they have a point. We're not using the out-of-the-box image component per se, but the behavior is the same.
The src attribute of the img tags gets the path of the image node, with the img selector and some other stuff appended to it. This of course causes the request to go through the image servlet, which is then responsible for drawing the image. If I understand correctly, it's done this way to support things like the crop/resize/etc tools available in the html5smartimage widget. The servlet applies these edits to the image and renders the altered image.
The complaint is that the actual file names for the images are nowhere to be seen in that src attribute. I'm operating on the assumption that this is a valid complaint, but I really don't know whether it is. I'm likely going to be asked to jump through hoops to change this behavior so that the src attribute references the image by its direct path in the DAM.
Are these valid complaints? If the complaints are valid, why would the image component work this way? Should the title/alt values be considered sufficient for SEO purposes? If my customer is not using the extra features from html5smartimage, is there any other reason why I should not just address the images by their explicit DAM path? I've already worked out what I think is the best solution, but I'd like to be armed with more information before taking that plunge.
The image component as it is allows you to have server-side modified renditions of the same image (with the usual transformations like cropping, rotation, ...), customised for each usage; that is, different content for each usage (one original image, different settings in each component).
As you mention, this has the drawback of locating the src of the image at a rather SEO-unfriendly URL (i.e. where the component content lives).
If you only want ONE version of an image, you should simply refer directly to the DAM image (or whatever image hosting you use).
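To illustrate with hypothetical paths: the servlet-rendered URL exposes the page/component path, while a direct DAM reference keeps the asset's file name visible:

<!-- rendered through the image servlet (file name nowhere in sight): -->
<img src="/content/mysite/en/home/jcr:content/par/image.img.jpg" alt="..."/>
<!-- referenced directly from the DAM (file name visible to crawlers): -->
<img src="/content/dam/mysite/summer-banner.jpg" alt="..."/>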

Screen Scraping with HTTP Headers Issue - I Think

I've been trying to figure this one out for about a week now and just can't come up with a good solution. So, I figured I would see if anyone could help me out. Here's one of the links that I'm trying to scrape:
http://content.lib.washington.edu/cdm4/item_viewer.php?CISOROOT=/alaskawcanada&CISOPTR=491&CISOBOX=1&REC=4
I right-clicked to copy the image location. This is the link that is copied:
http://content.lib.washington.edu/cgi-bin/getimage.exe?CISOROOT=/alaskawcanada&CISOPTR=491&DMSCALE=100.00000&DMWIDTH=802&DMHEIGHT=657.890625&DMX=0&DMY=0&DMTEXT=%20NA3050%20%09AWC0644%20AWC0388%20AWC0074%20AWC0575&REC=4&DMTHUMB=0&DMROTATE=0
There is no clear image URL being displayed. Obviously that's because the image is hidden behind some type of script. Through trial and error I found that I can put ".jpg" after "CISOPTR=491" and the link becomes an image URL. The problem is that this is not the high-resolution version of the image; to get to the high-resolution version I have to change the URL even more. I found a lot of articles on Stack Overflow that mention trying to build a script using curl and PHP, and I have even tried a few of them with no luck. "491" is the image number, and I can change that number to find other images in the same directory, so scraping a sequence of numbers should be pretty easy. But I'm still a noob at scraping and this one is kicking my butt. Here's what I've tried:
Get remote image using cURL then resample
I also tried this:
http://psung.blogspot.com/2008/06/using-wget-or-curl-to-download-web.html
I also have OutWit Hub and SiteSucker, but they don't recognize the URL as an image file, so they just pass right over it. I used SiteSucker overnight and it downloaded 40,000 files; only 60 were JPEGs, none of which were the ones I wanted.
The other thing I keep running into: for the files I have been able to download manually, the filename is always either getfile.exe or showfile.exe, and if I manually add ".jpg" as the extension I can view the image locally.
How can I reach the original high-res image file, and automate the download process so that I can scrape a couple hundred of these images?
"I right-clicked to copy image location. This is the link that is copied:"
You noticed the URL has ".exe" in there. Look at the stuff in the query string:
DMSCALE=100.00000
DMWIDTH=802
DMHEIGHT=657.890625
DMX=0
DMY=0
DMTEXT=%20NA3050%20%09AWC0644%20AWC0388%20AWC0074%20AWC0575
REC=4
DMTHUMB=0
DMROTATE=0
This strongly implies that the original source of this image is in a database or similar, and that it is being passed through a server-side filter (not sure if that is what you meant by "some kind of script"). That is, this is dynamically generated content, not static, and the same caveats apply as to dynamic text content: you have to figure out what instructions to provide the server to get it to cough up what you want. You pretty much have those in front of you... if SiteSucker or whatever won't deal with it properly, scrape the addresses yourself using an HTML parser.
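For example, a rough shell sketch, assuming the CISOPTR numbers are sequential and that the scale/width/height parameters above produce the full-resolution rendition (the number range and dimensions here are guesses to tune):

for n in $(seq 400 600); do
  curl -s -o "image_${n}.jpg" \
    "http://content.lib.washington.edu/cgi-bin/getimage.exe?CISOROOT=/alaskawcanada&CISOPTR=${n}&DMSCALE=100&DMWIDTH=1600&DMHEIGHT=1200&DMX=0&DMY=0&REC=1&DMTHUMB=0&DMROTATE=0"
  sleep 1   # be polite to the server
done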