Importing @arcgis/core ElevationLayer and Point doubles build size

When I import Point and ElevationLayer from @arcgis/core, my bundle size doubles to 20 MB with 127 files in a React build. Once I remove the imports, it drops back down to just 9 files (including source maps) and about 10 MB.
Any ideas on how to fix this?
import Point from '@arcgis/core/geometry/Point'
import ElevationLayer from '@arcgis/core/layers/ElevationLayer'
I use this to query a LatLng position for elevation.
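The query itself looks roughly like this (a sketch; the coordinates are placeholders, and Esri's world elevation service URL is just one common choice):

// Create an elevation layer pointing at an elevation image service.
const layer = new ElevationLayer({
  url: 'https://elevation3d.arcgis.com/arcgis/rest/services/WorldElevation3D/Terrain3D/ImageServer'
})

// Query the elevation (z value) at a longitude/latitude position.
const point = new Point({ longitude: -121.5, latitude: 45.6 })
layer.queryElevation(point).then((result) => {
  console.log(result.geometry.z)
})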
I'm using @arcgis/core version 4.21.2

This behavior occurs because bundling is working correctly. Files added to on-disk bundles increase the on-disk footprint, but that does not necessarily mean they will be requested by the app at runtime. For example, we ran a quick test and those two imports only increase the initial app load size by 150-200 KB.

Related

FabricJS v3.4.0: Filters & maxTextureSize - performance/size limitations

Intro:
I've been messing with fabricJS's image filtering features in an attempt to start using them in my web app, but I've run into the following.
It seems fabricJS by default caps the image size (textureSize) for filters at 2048, meaning the largest filterable image is 2048x2048 pixels.
I've attempted to raise the default by calling fabric.isWebGLSupported() and then setting fabric.textureSize = fabric.maxTextureSize, but that still caps it at 4096x4096 pixels, even though maxTextureSize on my device is around 16000.
I realize that devices usually report the full value without accounting for current memory actually available, but that still seems like a hard limitation.
So I guess these are the main issues I need to solve to start using this feature effectively:
1- Render blocking applyFilters() method:
The current filter application function seems to be render-blocking in the browser. Is there a way to call it without blocking rendering, so I can show an indeterminate loading spinner or something?
Is it as simple as making the apply-filter method async and calling it from somewhere else in the app? (For context, I'm using Vue, with webpack/babel, which polyfills async/await etc.)
2- Size limits:
Is there a way to bypass the size limit on images? I'm looking to filter images up to 4800x7200 pixels.
I can think of at least one way to do this, which is to "break up" the image into smaller images, apply the filters, and then stitch it back together. But I worry it might be a performance hit, as there would be a lot of canvas exports and canvas initializations in this process.
I'm surprised fabricJS doesn't do this "chunking" by default, as it's quite a comprehensive library, and they've already gone to the point of using WebGL shaders (a black box to me) for filtering under the hood for performance. Is there a better way to do this?
My other solution would be to send the image to a service (one I hand-roll, or a pre-existing paid one) that applies the filters somewhere in the cloud and returns the result to the user, but that's not a solution I want to resort to just yet.
For context, I'm mostly using fabric.Canvas and fabric.StaticCanvas to initialize canvases in my app.
Any insights/help with this would be great.
I wrote the filtering backend for fabricJS together with Mr. Scott Seaward (credits to him too), and I can give you some answers.
Hard cap at 2048
A lot of MacBooks with only an integrated Intel video card report a max texture size of 4096, but then crash the WebGL instance at anything higher than 2280. This was happening widely in 2017, when the WebGL filtering was written; a default of 4096 would have left a LOT of notebooks uncovered. Do not forget mobile phones either.
You know your user base, so you can raise the limit to whatever your video card allows and whatever the canvas allows in your browser. However big the texture can be, the final image must still be copied into a canvas and displayed (canvas has a different max size depending on browser and device).
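As a sketch, the knob looks like this (the 8192 clamp is just an example value; pick whatever your users' hardware supports):

// Probe WebGL support; as a side effect fabric records the GPU's
// reported limit in fabric.maxTextureSize.
if (fabric.isWebGLSupported(fabric.textureSize)) {
  // Raise the filtering cap, but keep a safety margin below the
  // reported maximum: GPUs often report more than they can allocate.
  fabric.textureSize = Math.min(fabric.maxTextureSize, 8192)
  // Recreate the backend so it picks up the new size.
  fabric.filterBackend = fabric.initFilterBackend()
}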
Render blocking applyFilters() method
WebGL is synchronous, as far as I understand.
Creating parallel execution in a thread for filtering operations that are on the order of 20-30 ms (sometimes just a couple of ms in Chrome) seems excessive.
Also consider that I tried it, but when more than 4 WebGL contexts were open in Firefox, some would get dropped, so I decided on one at a time.
The non-WebGL filtering takes longer, of course, and could probably be done in a separate thread. But fabricJS is a generic library that does vectors, filtering, and serialization; it already has a lot on its plate, and filtering performance is not that bad. I'm open to arguing about it, though.
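If all you need is the spinner, there is no need for a thread: show it, then yield to the event loop so the browser can paint before the synchronous run starts. A rough sketch (spinner, image and canvas are your own objects):

spinner.style.display = 'block'
// Give the browser one frame to paint the spinner before the
// synchronous filter run blocks the main thread.
requestAnimationFrame(() => {
  setTimeout(() => {
    image.applyFilters() // blocks for the duration of the run
    canvas.requestRenderAll()
    spinner.style.display = 'none'
  }, 0)
})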
Chunking
The Shutterstock editor uses fabricJS and is the main reason the WebGL backend was written. The editor also has chunking and can filter bigger images with tiles of 2048 pixels. We did not release that as open source, and I do not plan to ask. That kind of tiling limits the kinds of filters you can write, because the code only has knowledge of a limited portion of the image at a time; even just blurring becomes complicated.
Here is a description of the tiling process; it is written for the casual reader, not only software engineers, and is just a blog post:
https://tech.shutterstock.com/2019/04/30/canvas-webgl-filtering-concepts
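For per-pixel filters (anything without a kernel), the naive tiling you describe would be along these lines; the helper names here are hypothetical:

function filterInTiles(source, applyFilterToCanvas, tileSize) {
  const out = document.createElement('canvas')
  out.width = source.width
  out.height = source.height
  const ctx = out.getContext('2d')
  for (let y = 0; y < source.height; y += tileSize) {
    for (let x = 0; x < source.width; x += tileSize) {
      const w = Math.min(tileSize, source.width - x)
      const h = Math.min(tileSize, source.height - y)
      // Copy one tile out, filter it, then stitch it back in place.
      const tile = document.createElement('canvas')
      tile.width = w
      tile.height = h
      tile.getContext('2d').drawImage(source, x, y, w, h, 0, 0, w, h)
      applyFilterToCanvas(tile)
      ctx.drawImage(tile, x, y)
    }
  }
  return out
}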
Generic render-blocking considerations
So fabricJS has some pre-written filters made with shaders.
The timings I note here are from memory and have not been re-verified.
The time spent filtering an image breaks down as:
Uploading the image to the GPU (I do not know how many ms)
Compiling the shader (up to 40 ms, it depends)
Running the shader (around 2 ms)
Downloading the result from the GPU (around 0 ms or 13 ms, depending on which method is used)
Now the first time you run a filter on a single image:
The image gets uploaded
Filter compiled
Shader Run
Result downloaded
The second time you do this:
Shader Run
Result downloaded
When a new filter is added or a filter is changed:
The new filter is compiled
One or both shaders run
Result downloaded
The most common errors I have noticed when building applications with filtering are:
You forget to remove old filters, leaving them active with a value near 0 that produces no visual change but still adds up in time.
You connect the filter to a slider change event without throttling, which, depending on the browser/device, can trigger up to 120 filtering operations per second (see the sketch below).
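A minimal trailing throttle looks like this (slider, image and canvas are your own objects, and the first filter is assumed to be a Brightness filter):

let pending = false
slider.addEventListener('input', () => {
  if (pending) return
  pending = true
  // Run at most one filter pass per 100 ms instead of one per event.
  setTimeout(() => {
    pending = false
    image.filters[0].brightness = parseFloat(slider.value)
    image.applyFilters()
    canvas.requestRenderAll()
  }, 100)
})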
Look at the official simple demo:
http://fabricjs.com/image-filters
Use the sliders to filter, apply even more filters; everything seems pretty smooth to me.

GDAL CreateCopy is very slow

I am processing some 4096x4096 JPEG2000 images with OpenCV, and my resulting image is missing all the GDAL metadata. I have implemented a copy method in my C++ program that uses GDAL CreateCopy to copy the metadata from the original image to the destination. So far I am seeing a copy time of over 97 seconds for a basic metadata copy. This is longer than all my image processing put together! I have tried increasing the GDAL cache to 200, and even to 10485760; setting either value makes the time even worse!
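The copy method looks roughly like this (a simplified sketch; the JP2OpenJPEG driver name, the paths, and the missing error handling are assumptions for illustration):

#include "gdal_priv.h"

void copyWithMetadata(const char *srcPath, const char *dstPath)
{
    GDALAllRegister();
    GDALDataset *src = static_cast<GDALDataset *>(GDALOpen(srcPath, GA_ReadOnly));
    GDALDriver *drv = GetGDALDriverManager()->GetDriverByName("JP2OpenJPEG");
    // CreateCopy re-encodes the entire raster, not just the metadata,
    // which is why it can dwarf the rest of the processing time.
    GDALDataset *dst = drv->CreateCopy(dstPath, src, FALSE,
                                       nullptr, nullptr, nullptr);
    GDALClose(dst);
    GDALClose(src);
}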

Is there a limit of text data that can be stored in an executable binary?

My OS X app features an in-application help system that consists of static strings worth roughly 4 MB of raw text data.
Normally, one would store these help texts in a lightweight database (SQLite springs to mind) bundled with the application and fetch them from it on access. Instead, for the sake of simplicity, I chose to store the help text in a large NSDictionary of many NSStrings (generated automatically at compile time). Access is reasonably fast, and the only "drawback" I can think of is that the NSDictionary constantly consumes 4 MB of memory even when it is not in use, which is really not an issue with modern hardware.
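Schematically, the generated code amounts to something like this (the keys and strings here are placeholders):

// A compile-time generated table of help texts, created once.
static NSDictionary *helpTexts = nil;

static void loadHelpTexts(void)
{
    helpTexts = [[NSDictionary alloc] initWithObjectsAndKeys:
        @"To create a document, choose File > New.", @"help.new",
        @"To export a document, choose File > Export.", @"help.export",
        nil];
}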
My solution is pragmatic, works fine for now, makes a compact app that doesn't spill its internal data on disk and yet it gives me an uneasy feeling.
So, I think my question is whether what I'm doing is okay, or whether it is bad practice in any way. Concisely:
Is it, from a technical point of view, okay to "bake in" large amounts of text into an application binary?
Is there a size limit of static variable data that can be stored in (64 bit) Darwin Mach-O images?
Also, when you find a typographical error, you have to recompile and redeploy the entire app, instead of just providing an update to the database. A database makes deployment smoother for the customer.
And if it happens that your app is in such demand that you want to provide (for example) a German-language version, you have to change everything from scratch.
As a rule of thumb: small binary, large database, assets kept separately.

Is there PNG compression in ImageResizer like tinypng.org?

I have this project that displays a big list of transparent pngs. I use Cloudfront and ImageResizer to serve up my images:
media.mysite.com/Images/imageA.png;w=170
Now here is my pickle.
imageA.png is ~220kb --> After tinypng.org --> ~87kb / ~62% reduction
I have access to PNG "compression" libraries like that, but the problem is that ImageResizer bumps the size back up to full (uncompressed):
imageA.png;w=170 is ~90kb --> After tinypng.org --> ~20kb / ~62% reduction
So even if I run my imageA.png through tinypng.org and then apply the ImageResizer resizing, I lose the PNG compression. And in the end this makes my page heavy.
(I am thinking about a workaround that loads my images asynchronously.)
So I'm wondering: can ImageResizer do this PNG compression "on the fly"?
I'm reading about http://imageresizing.net/plugins/prettygifs but I'm not sure I understand how it works.
Thanks!
This appears to be a duplicate of PNG Compression and ImageResizer.
To sum up: PNG encoding is absolutely nothing like JPEG encoding.
Fully optimizing the encoding of a PNG file takes lots of time and memory. You can't resize an image without decoding and re-encoding it. ImageResizer prefers to get the image back to you fast (~100-250ms) with a larger file size, instead of taking 5-15 seconds and using all of your server's RAM to generate a smaller PNG. Consider how long tinypng.org takes to optimize your image. Would you be OK with that kind of delay, in real time, on your website?
ImageResizer offers WebP, which can give you a better speed/size balance than PNG.
Basically, you won't get much file size benefit out of using ImageResizer on pre-optimized PNG sprites.
This plugin works great for me
https://github.com/svenrog/ImageResizer.Plugins.PngOptimizer
Just build it (I couldn't find it on NuGet) and add the following to the <plugins> section of your web.config:
<add name="PngOptimizer" />
Then add &optimized=1 to your URL parameters.
Make sure you are using some kind of cache; Amazon CloudFront works best because you can point it at your website and serve any file cached.
Important: there's a typo in the plugin's example; the parameter should be optimized=1, not optimize=1.
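With the plugin registered, a request would then look something like this (querystring form; the semicolon syntax from the question should map the same way):
media.mysite.com/Images/imageA.png?w=170&optimized=1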

Is there any way to load a bunch of images from Resources?

Hey guys, I'm having a little problem here: I am getting a Level 1 memory warning in my debug output.
So I think the best solution is to change how I load all the images from Resources, to keep the app from crashing the way it currently does.
So, what's the best way to do it?
Thanks!
There are several strategies to reduce memory usage if you're working with lots of images. The warning you're getting doesn't necessarily mean that main memory is running out; you could instead be running low on video RAM.
Reduce the image size before adding them to your project, both by scaling them down and/or compressing image data.
Load only the images you need at a particular time - avoid trying to keep all images in memory.
Load images using +[UIImage imageWithContentsOfFile:] rather than +[UIImage imageNamed:] (the latter reads and caches images immediately, the former is more "lazy"; see the sketch after this list).
Be aggressive in releasing images: don't wait for autorelease to kick in, but send [image release] as soon as you're done.
Simplify other aspects of your code that use video ram (e.g. remove layer transparency, remove views that aren't currently visible, etc.)
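For the third point, the difference looks like this (the file names are placeholders):

// Lazy, uncached load: the decoded data can be purged as soon as
// the image is released.
NSString *path = [[NSBundle mainBundle] pathForResource:@"photo" ofType:@"png"];
UIImage *photo = [UIImage imageWithContentsOfFile:path];

// Cached load: UIKit keeps the decoded image around, which adds up
// quickly with many large images.
UIImage *icon = [UIImage imageNamed:@"icon.png"];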
I think your question needs more details to get a clear answer.
Anyway, the best solution to avoid a memory warning while loading "a bunch of images"...
is...
Not loading these images! Only load required images "on demand".
For example, if you have images in a table view, make the cells load images only when they are visible.