How to determine maximum scale for cytoscape.js png output? - cytoscape.js

I want users to be able to generate high quality images for publication from their cytoscape.js graphs (in fact, this is the major use case of my application). Since there's no export from cytoscape.js to a vector format, my documentation tells users to just zoom to a high level of resolution before exporting to png, and then my code passes in the current zoom level as the scale parameter to cy.png(), along with full:true. However, at sufficiently high values for scale, no image gets generated. This maximum scale value seems to be different for different graphs, so I assume it's based on some maximum physical dimensions. How can I determine what the maximum scale value should be for any given graph, so my code won't exceed it?
Also, if I have deleted or moved elements such that the graph takes up less physical space than it used to, cy.png() seems to include that blank area in the resulting image even though it no longer contains any elements -- is there any way to avoid that?

By default, cy.png() and cy.jpg() will export the current viewport to an image. To export the fitted graph, use cy.png({ full: true }) or cy.jpg({ full: true }).
The browser enforces limits on the overall size of exported images. This depends on the OS and the browser vendor; Cytoscape.js cannot influence this limit, nor can it calculate it. For large resolutions/scales, the quality difference between PNG and JPG is negligible, so JPG tends to be a better option for large exports.
Aside: while bitmap formats scale with resolution, vector formats (like SVG) scale with the number of graph elements. For large graphs, it may be impossible to export SVG (even if such a feature were implemented).
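Since the limit cannot be queried from Cytoscape.js, one pragmatic workaround is to clamp the user's requested scale against an assumed pixel budget computed from the fitted graph's bounding box (cy.elements().boundingBox()). A minimal sketch of that arithmetic, written in Python purely for illustration; MAX_PIXELS is an assumption to tune per target browser, not a value the library exposes:

# Hedged sketch: clamp the scale passed to cy.png({ full: true, scale }).
MAX_PIXELS = 16_000 * 16_000  # assumed total-pixel budget for the export canvas

def max_safe_scale(bb_width: float, bb_height: float, max_pixels: int = MAX_PIXELS) -> float:
    """Largest scale whose full-graph export stays under the assumed pixel budget.

    bb_width/bb_height are the dimensions of cy.elements().boundingBox() in model units.
    """
    area = bb_width * bb_height
    if area == 0:
        return 1.0
    return (max_pixels / area) ** 0.5

# Example: a graph whose fitted bounding box is 2400 x 1800 model units
print(max_safe_scale(2400, 1800))  # clamp the user's requested scale to this value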

Related

Should the size of the photos be the same for deep learning?

I have lots of images (about 40 GB).
My images are small, but they don't have the same size.
My images aren't from natural things because I made them from a signal, so all pixels are important and I can't crop or delete any pixels.
Is it possible to use deep learning for this kind of image with different shapes?
All pixels are important; please take this into consideration.
I want a model which does not depend on a fixed-size input image. Is it possible?
Without knowing what you're trying to learn from the data, it's tough to give a definitive answer:
You could pad all the data at the beginning (or end) of the signal so they're all the same size. This lets you keep all the important pixels, but adds irrelevant information to the image that the network will most likely ignore (a minimal padding sketch follows at the end of this answer).
I've also had good luck with activations, where you take a pretrained network and pull features from the image at a certain part of the network regardless of size (as long as the image is larger than the network input size), and then run them through a classifier.
https://www.mathworks.com/help/deeplearning/ref/activations.html#d117e95083
Or you could window your data and only process smaller chunks at a time.
https://www.mathworks.com/help/audio/examples/cocktail-party-source-separation-using-deep-learning-networks.html
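Here is a minimal sketch of the padding option mentioned above, using numpy with zeros as an assumed fill value; nothing is cropped, so every original pixel is preserved:

import numpy as np

def pad_to_size(img: np.ndarray, target_h: int, target_w: int, value: float = 0.0) -> np.ndarray:
    """Pad a 2-D image at the bottom/right so every sample reaches the same shape.

    No pixels are cropped or altered; the padding value (zeros here) is an assumption
    that the network can learn to ignore.
    """
    h, w = img.shape[:2]
    pad_h, pad_w = target_h - h, target_w - w
    if pad_h < 0 or pad_w < 0:
        raise ValueError("target size must be at least as large as the image")
    pad_spec = [(0, pad_h), (0, pad_w)] + [(0, 0)] * (img.ndim - 2)
    return np.pad(img, pad_spec, mode="constant", constant_values=value)

# Example: pad a batch of differently sized signal images to the largest size found
images = [np.random.rand(48, 60), np.random.rand(52, 44)]
H = max(im.shape[0] for im in images)
W = max(im.shape[1] for im in images)
batch = np.stack([pad_to_size(im, H, W) for im in images])
print(batch.shape)  # (2, 52, 60)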

Is it possible to rasterize sns.distplot(,rug=True)?

In Matplotlib it is possible to plot a very long array A using rasterized=True, as in the following:
plt.plot(A, rasterized=True)
This typically lowers the memory usage.
Is it possible to do the same when drawing a rugplot on the support axis in Seaborn's sns.distplot (see http://seaborn.pydata.org/generated/seaborn.distplot.html)? In fact, such a rugplot can consist of many points and consume a lot of memory, too.
EDIT:
As noted in the answer below, this does not lower RAM consumption, but when saving the plot to a file in PDF format, it can alter (i.e., decrease or, under certain circumstances, even increase) the size of the file on disk.
Seaborn's distplot, like many other seaborn plots, allows passing keyword arguments to the underlying matplotlib functions.
In this case, distplot has a keyword argument rug_kws, which accepts a dictionary of keyword arguments to be passed to the rugplot. Those are in turn forwarded to the underlying matplotlib axvline function.
As such, you can easily provide rasterized=True to axvline via
ax = sns.distplot(x, rug=True, hist=False, rug_kws=dict(rasterized=True))
However, I'm not sure if this has the desired effect of lowering memory consumption. In general, rasterization is applied when saving the figure, so the plot shown on the screen will not be affected at all.
During the process of saving, the rasterization has to be applied, which takes more time and memory than without rasterization.
While bitmap files like PNG are completely rasterized anyway and will not be affected at all, the generated vector files (like PDF, EPS or SVG) may even have a larger file size compared to their unrasterized counterparts.
Rasterization only pays off when actually opening such a file (e.g., a PDF in a viewer) or processing it (e.g., in LaTeX), where the rasterized part consumes much less memory and allows for faster rendering on screen or printing.
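To check the file-size effect described above on a concrete figure, one can save the same rug plot with and without rasterization and compare the resulting PDFs. A minimal sketch, assuming a seaborn version that still provides distplot (it is deprecated in recent releases) and using arbitrary random data; exact sizes depend on the backend, dpi and data:

import os
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt

x = np.random.randn(20_000)  # a long array, as in the question

# Only the saved vector output differs; the on-screen figure looks the same.
for rasterized, name in [(False, "rug_vector.pdf"), (True, "rug_rasterized.pdf")]:
    plt.figure()
    sns.distplot(x, rug=True, hist=False, rug_kws=dict(rasterized=rasterized))
    plt.savefig(name, dpi=150)
    plt.close()
    print(name, os.path.getsize(name), "bytes")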

How to configure imageresizer (imageresizing.net) to limit the number of scaled images

In the common responsive web page scenario, the browser requests images at a size determined by the current browser window size, so the size requests for an image will look like:
image740?height=731
image740?height=911
image740?width=402
image740?width=403
image740?width=2203
To avoid caching all of those highly specific image sizes and to improve cache utilization, I would like to set some predefined sizes that are created on the server side. So, for instance, all image requests between height 600 and 1200 would deliver an image with height 1200.
Q: Is it possible to configure imageresizer doing this?
Q: Is enhancing the SizeLimiting plugin a good place to implement this?
The Presets plugin lets you define, well, presets, and use those exclusively.
The better solution, however, is to fix your client-side JavaScript to use intervals instead of the exact browser size. Slimmage.js does this by dividing the pixel count by a factor, rounding up, then multiplying by the same factor. 160 is a good factor that generates ~13 sizes under 2048.
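To illustrate the rounding step described above, here is a minimal sketch (in Python, purely for illustration; Slimmage.js itself does this client-side in JavaScript) of dividing by a factor, rounding up, and multiplying back:

import math

FACTOR = 160  # the rounding interval mentioned above

def quantize_width(requested_px: int, factor: int = FACTOR) -> int:
    """Round a requested width up to the next multiple of `factor`.

    Every request between (n-1)*factor + 1 and n*factor maps to the same cached size.
    """
    return math.ceil(requested_px / factor) * factor

# The five example requests from the question collapse to just four cached sizes
print(sorted({quantize_width(w) for w in (402, 403, 731, 911, 2203)}))
# [480, 800, 960, 2240]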

Improving Speed of Histogram Back Projection

I am currently using OpenCV's built-in patch-based histogram back projection (cv::calcBackProjectPatch()) to identify regions of a target material in an image. With an image resolution of 640 x 480 and a window size of 10 x 10, processing a single image requires ~1200 ms. While the results are great, this is far too slow for a real-time application (which should have a processing time of no more than ~100 ms).
I have already tried reducing the window size and switching from CV_COMP_CORREL to CV_COMP_INTERSECT to speed up the processing, but have not seen any appreciable speed up. This may be explained by the OpenCV documentation (emphasis mine):
Each new image is measured and then converted into an image array over a chosen ROI. Histograms are taken from this image in an area covered by a "patch" with an anchor at its center, as shown in the picture below. The histogram is normalized using the parameter norm_factor so that it may be compared with hist. The calculated histogram is compared to the model histogram hist using the function cvCompareHist() with the comparison method=method. The resulting output is placed at the location corresponding to the patch anchor in the probability image dst. This process is repeated as the patch is slid over the ROI. Iterative histogram update by subtracting trailing pixels covered by the patch and adding newly covered pixels to the histogram can save a lot of operations, though it is not implemented yet.
This leaves me with a few questions:
Is there another library that supports iterative histogram updates?
How significant of a speed-up should I expect from using an iterative update?
Are there any other techniques for speeding up this type of operation?
As mentioned in the OpenCV documentation, integral histograms will definitely improve speed.
Please take a look at a sample implementation at the following link:
http://smsoftdev-solutions.blogspot.com/2009/08/integral-histogram-for-fast-calculation.html
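As a rough illustration of the idea in the linked post, here is a minimal numpy sketch of an integral histogram: one cumulative-sum volume per bin, from which any patch histogram can be read off with four lookups per bin instead of re-scanning the patch. The bin count and image size below are arbitrary assumptions:

import numpy as np

def integral_histogram(img: np.ndarray, n_bins: int = 16) -> np.ndarray:
    """Per-bin cumulative sums: result[y, x, b] = count of bin-b pixels in img[:y+1, :x+1]."""
    bins = np.minimum((img.astype(np.int32) * n_bins) // 256, n_bins - 1)
    one_hot = (bins[..., None] == np.arange(n_bins)).astype(np.int64)
    return one_hot.cumsum(axis=0).cumsum(axis=1)

def patch_histogram(ihist_padded, y, x, h, w):
    """Histogram of the h x w patch with top-left corner (y, x), via inclusion-exclusion."""
    return (ihist_padded[y + h, x + w] - ihist_padded[y, x + w]
            - ihist_padded[y + h, x] + ihist_padded[y, x])

img = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
ih = integral_histogram(img)
# Pad with a leading zero row/column so the inclusion-exclusion indexing is uniform.
ih = np.pad(ih, ((1, 0), (1, 0), (0, 0)))
print(patch_histogram(ih, 100, 200, 10, 10).sum())  # 100 pixels in a 10 x 10 patch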

Streaming Jpeg Resizer

Does anyone know of any code that does streaming JPEG resizing? What I mean by this is reading a chunk of an image at a time (depending on the original source and destination size, this would obviously vary) and resizing it, allowing for lower memory consumption when resizing very large JPEGs. Obviously this wouldn't work for progressive JPEGs (or at least it would become much more complicated), but it should be possible for standard JPEGs.
The design of JPEG data allows simple resizing to 1/2, 1/4 or 1/8 size. Other variations are possible. These same size reductions are easy to do on progressive JPEGs as well, and the quantity of data to parse in a progressive file will be much less if you want a reduced-size image. Beyond that, your question is not specific enough to know what you really want to do.
Another simple trick to reduce the data size by 33% is to render the image into a RGB565 bitmap instead of RGB24 (if you don't need the full color space).
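As an aside on the 1/2, 1/4 and 1/8 reductions mentioned above, Pillow's JPEG decoder exposes them through Image.draft(), which asks the decoder to produce the reduced size in the first place rather than decoding full resolution and scaling afterwards. A minimal sketch (the input file name is hypothetical):

from PIL import Image  # Pillow

im = Image.open("large_photo.jpg")  # hypothetical input file
# Request the nearest 1/2, 1/4 or 1/8 size before any pixels are decoded.
im.draft("RGB", (im.width // 8, im.height // 8))
im.load()        # decodes at the reduced size
print(im.size)   # roughly 1/8 of the original dimensions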
I don't know of a library that can do this off the shelf, but it's certainly possible.
Let's say your JPEG is using 8x8 pixel MCUs (the units in which pixels are grouped). Let's also say you are reducing by a factor of 12 to 1. The first output pixel needs to be the average of the 12x12 block of pixels at the top left of the input image. To get to the input pixels with a y coordinate greater than 8, you need to have decoded the start of the second row of MCUs. You can't really decode those pixels before decoding the whole of the first row of MCUs. In practice, that probably means you'll need to store two rows of decoded MCUs. Still, for a 12000x12000 pixel image (roughly 150 megapixels) you'd reduce the memory requirements by a factor of 12000/16 = 750. That should be enough for a PC. If you're looking at embedded use, you could horizontally resize the rows of MCUs as you read them, reducing the memory requirements by another factor of 12, at the cost of a little more code complexity.
I'd find a simple JPEG decoder library like Tiny Jpeg Decoder and look at the main loop in the JPEG decode function. In the case of Tiny Jpeg Decoder, the main loop calls decode_MCU; modify from there. :-)
You've got a bunch of fiddly work to do to make the code work for non-8x8 MCUs and a load more if you want to reduce by a non-integer factor. Sounds like fun though. Good luck.
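The buffering strategy above can be sketched independently of any particular decoder: hold only one block of rows in memory at a time and emit an averaged output row as each block completes. A minimal Python/numpy sketch, where the row iterator is a hypothetical stand-in for the modified decode_MCU loop:

import numpy as np

FACTOR = 12  # reduction factor from the example above

def downscale_streaming(rows, width):
    """Average FACTOR x FACTOR blocks while holding only FACTOR decoded rows at a time.

    `rows` is any iterator yielding one decoded pixel row (length `width`) at a time,
    standing in for the output of a modified MCU-row decode loop.
    """
    strip, out = [], []
    for row in rows:
        strip.append(row)
        if len(strip) == FACTOR:
            block = np.stack(strip)[:, : (width // FACTOR) * FACTOR]
            out.append(block.reshape(FACTOR, -1, FACTOR).mean(axis=(0, 2)))
            strip.clear()
    return np.stack(out)

# Example with synthetic rows standing in for decoder output
h, w = 96, 120
small = downscale_streaming((np.random.rand(w) for _ in range(h)), w)
print(small.shape)  # (8, 10)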