Blend pixels on .net core - asp.net-core

I am trying to overlay a part of one image on top of another image on .net core (code needs to be cross platform).
I considered using ImageSharp, since it supports Windows, macOS, and Linux.
But I couldn't find pixel blending on their features list, although I saw that you can access individual pixels.
The use case would be: I have two 4K PNG images, and I want a small part of the first image (roughly a 10% square of the overall image) to be overlaid on top of the second image (not over the whole image, just the same 10% region), and to get the area where the merging happened out as a new JPEG image.
(The source PNGs have some degree of transparency.)
I considered cropping out the two parts I want to merge from the two 4K images and then blending them to get the final image, but that is too slow for the needs of the project I'm working on.

ImageSharp does support pixel blending: you can specify the pixel blending mode during Draw/Fill operations by passing in a GraphicsOptions parameter and setting its BlenderMode and BlendPercentage (defaults to 100%) properties; see the sketch after the list below.
Currently ImageSharp has implementations for the following blending modes:
Normal
Multiply
Add
Substract
Screen
Darken
Lighten
Overlay
HardLight
Src
Atop
Over
In
Out
Dest
DestAtop
DestOver
DestIn
DestOut
Clear
Xor
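For example, here is a minimal sketch of cropping and compositing with a recent ImageSharp release; note that some names have shifted between versions (the old BlenderMode enum is exposed as PixelColorBlendingMode in later releases), and the file names, region, and blend mode below are placeholders:

    using SixLabors.ImageSharp;
    using SixLabors.ImageSharp.PixelFormats;
    using SixLabors.ImageSharp.Processing;

    // Load the two 4K source PNGs (paths are placeholders).
    using var background = Image.Load<Rgba32>("background.png");
    using var overlay = Image.Load<Rgba32>("overlay.png");

    // The ~10% region of interest, in pixels (example values).
    var region = new Rectangle(256, 256, 384, 216);

    // Crop a copy of the overlay down to just the part to composite.
    using var patch = overlay.Clone(ctx => ctx.Crop(region));

    // Blend the patch onto the background at the same position.
    background.Mutate(ctx => ctx.DrawImage(
        patch,
        new Point(region.X, region.Y),
        PixelColorBlendingMode.Normal,  // any of the modes listed above
        1f));                           // opacity, i.e. BlendPercentage = 100%

    // Keep only the merged area and save it as a JPEG.
    background.Mutate(ctx => ctx.Crop(region));
    background.Save("merged.jpg");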

Related

Rendering small text with Vulkan?

A font rendering library (like say freetype) provides a function that will take an outline font file (like a .ttf) and a character code and produce a bitmap of the corresponding glyph in host memory.
For small text (like say up to 30x30 pixel glyphs) what's the most efficient way to render those glyphs to a Vulkan framebuffer?
Some options I've thought about might be:
1. Render the glyphs with the font rendering library every time on demand, blit them with host code into a single host-side image holding a whole "text box", transfer the host-side image of the text box to a device-local image, and then render a quad (like a normal image) using a fragment shader / image sampler from the text box to be drawn.
2. At program startup, cycle through all the glyphs host-side and render them to glyph bitmaps. Do the same as 1, but blit from the cached glyph bitmaps (takes about 1 MB of host memory).
3. Cache the glyph bitmaps individually into device-local images. Rather than blitting host-side, render a quad for each glyph device-side and set the image sampler to the corresponding glyph each time. (Not sure how the draw calls would work? One draw call per glyph with a different combined image sampler every time?)
4. Cache all the glyph bitmaps into one large device-side image (laid out in a big grid, say). Use a single device-side combined image sampler, and push params to describe the subregion that contains the glyph image. One draw call per glyph, updating push params each time.
5. Like 4, but use a single instanced draw call, and rather than push params use instance-varying input attributes.
6. Something else?
I mean like, how do common game engines like Unreal or Unity or Godot etc solve this problem? Is there a typical approach or best practice?
First, some considerations:
Rasterizing a glyph at around 30px with freetype might take on the order of 10μs. This is a very small one-time cost, but rendering e.g. 100 glyphs every frame would seriously eat into your frame budget (if we assume the math is as simple as 100 * 10μs == 1ms).
State changes (like descriptor updates) are relatively expensive. Changing the bound descriptor for each character you render has non-negligible cost. This could be limited by batching character draws (draw all the As, then the Bs, etc), but using push constants is typically the fastest.
Instanced drawing with small meshes (such as quads or single triangles) can be very slow on some GPUs, as they will not schedule multiple instances on a single wavefront/warp. If you're rendering a quad with 6 vertices, and a single execution unit can process 64 vertices, you may end up wasting 58/64 = 90.6% of available vertex shading capacity.
This suggests 4 is your best option (although 5 is likely comparable); you can further optimize that approach by caching the results of the draw calls. Imagine you have some menu text:
The first frame it is needed, render all the text to an intermediate image.
Each frame it is needed, make a single draw call textured with the intermediate image. (You could also blit the text if you don't need transparency.)
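To make option 4 concrete, here is a small, API-agnostic sketch of the atlas bookkeeping (the grid layout, cell size, and struct layout are assumptions; the Vulkan plumbing, the descriptor for the atlas sampler, and the push-constant declaration are omitted):

    // One glyph's subregion in the atlas, in normalized texture coordinates.
    // This is what gets pushed per draw (alongside the screen-space rect).
    struct GlyphRect
    {
        public float U0, V0, U1, V1;
    }

    // Glyphs are baked at startup into a fixed grid of cellPixels x cellPixels
    // cells inside an atlasPixels x atlasPixels image.
    static GlyphRect AtlasRegion(int glyphIndex, int columns, int cellPixels, int atlasPixels)
    {
        int col = glyphIndex % columns;
        int row = glyphIndex / columns;
        float cell = (float)cellPixels / atlasPixels;  // one cell's size in UV space
        return new GlyphRect
        {
            U0 = col * cell,
            V0 = row * cell,
            U1 = (col + 1) * cell,
            V1 = (row + 1) * cell,
        };
    }

Per glyph you then compute the screen-space quad from the pen position and glyph metrics, fill the push-constant block with that quad plus the GlyphRect, and issue one draw sampling the shared atlas.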

XCode Coordinates for iPad Retina Displays

I just noticed an interesting thing while attempting to update my app for the new iPad Retina display: every coordinate in Interface Builder is still based on the original 1024x768 resolution.
What I mean by this is that if I have a 2048x1536 image, to have it fit the entire screen on the display I need to set its size to 1024x768, not 2048x1536.
I am just curious: is this intentional? Can I switch the coordinate system in Interface Builder to be specific to Retina? It is a little annoying, since some of my graphics are not exactly 2x their originals in either width or height. I can't seem to set half-point coordinates such as 1.5; it can only be 1 or 2 inside of Interface Builder.
Should I just do my interface design in code at this point and forget interface builder? Keep my graphics exactly 2x in both directions? Or just live with it?
The interface on iOS is based on points, not pixels. The images HAVE to be 2x the size of the originals.
Points Versus Pixels
In iOS there is a distinction between the coordinates you specify in your drawing code and the pixels of the underlying device. When using native drawing technologies such as Quartz, UIKit, and Core Animation, you specify coordinate values using a logical coordinate space, which measures distances in points. This logical coordinate system is decoupled from the device coordinate space used by the system frameworks to manage the pixels on the screen. The system automatically maps points in the logical coordinate space to pixels in the device coordinate space, but this mapping is not always one-to-one. This behavior leads to an important fact that you should always remember:
One point does not necessarily correspond to one pixel on the screen.
The purpose of using points (and the logical coordinate system) is to provide a consistent size of output that is device independent. The actual size of a point is irrelevant. The goal of points is to provide a relatively consistent scale that you can use in your code to specify the size and position of views and rendered content. How points are actually mapped to pixels is a detail that is handled by the system frameworks. For example, on a device with a high-resolution screen, a line that is one point wide may actually result in a line that is two pixels wide on the screen. The result is that if you draw the same content on two similar devices, with only one of them having a high-resolution screen, the content appears to be about the same size on both devices.
In your own drawing code, you use points most of the time, but there are times when you might need to know how points are mapped to pixels. For example, on a high-resolution screen, you might want to use the extra pixels to provide extra detail in your content, or you might simply want to adjust the position or size of content in subtle ways.
In iOS 4 and later, the UIScreen, UIView, UIImage, and CALayer classes expose a scale factor that tells you the relationship between points and pixels for that particular object. Before iOS 4, this scale factor was assumed to be 1.0, but in iOS 4 and later it may be either 1.0 or 2.0, depending on the resolution of the underlying device. In the future, other scale factors may also be possible.
From http://developer.apple.com/library/ios/#documentation/2DDrawing/Conceptual/DrawingPrintingiOS/GraphicsDrawingOverview/GraphicsDrawingOverview.html
This is intentional on Apple's part, to make your code relatively independent of the actual screen resolution when positioning controls and text. However, as you've noted, it can make displaying graphics at max resolution for the device a bit more complicated.
For iPhone, the screen is always 480 x 320 points. For iPad, it's 1024 x 768. If your graphics are properly scaled for the device, the impact is not difficult to deal with in code. I'm not a graphic designer, and it's proven a bit challenging to me to have to provide multiple sets of icons, launch images, etc. to account for hi-res.
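To illustrate the mapping with the asker's numbers, here is a tiny sketch of the arithmetic (a hypothetical helper, not an iOS API): positions and sizes stay in points, and the backing pixels are points multiplied by the scale factor.

    // Hypothetical helper: convert a size in points to pixels for a given scale.
    static (int Width, int Height) PixelSize(double widthPts, double heightPts, double scale)
        => ((int)Math.Round(widthPts * scale), (int)Math.Round(heightPts * scale));

    // PixelSize(1024, 768, 2.0) -> (2048, 1536): the full-screen image is laid
    // out as 1024x768 points in Interface Builder but rendered from a
    // 2048x1536-pixel asset on a Retina iPad.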
Apple has naming standards for some image types that minimize the impact on your code:
https://developer.apple.com/library/ios/#DOCUMENTATION/UserExperience/Conceptual/MobileHIG/IconsImages/IconsImages.html
That doesn't help you when you're dealing with custom graphics inline, however.

Programmatically, how does hue blending work in photoshop?

In Photoshop you can set a layer's blending mode to "Hue". If that layer is, for example, filled with blue, it seems to take the layer below and make it all blue wherever a non-whitish color exists.
I'm wondering what it's actually doing though. If I have a background layer with a pixel aarrggbb and the layer on top of that is set to blend mode "Hue" and there's a pixel aarrggbb on that layer, how are those two values combined to give the result that we see?
It doesn't just drop the rrggbb from the layer below. If it did that it'd color white and black as well. It also wouldn't allow color variations through.
If a background pixel is 0xff00ff00 and the corresponding hue layer pixel is 0xff0000ff then I'm assuming the end result will just be 0xff0000ff because the ff blue replaces the ff green. But, if the background pixel is 0x55112233 and the hue layer pixel is 0xff0000ff, how does it come up with the shade of blue that it comes up with?
The reason I ask is that I'd like to take various images and change the hue of the image programmatically in my app. Rather than storing 8 different versions of the same image with different colors, I'd like to store one image and color it as needed.
I've been researching a way to replicate that blending mode in javascript/canvas but I've only come up with the "colorize" filter/blend mode. (Examples below)
Colorize algorithm:
convert the colors from RGB to HSL;
change the Hue value to the wanted one (in my case 172° or 0.477);
convert the updated HSL back to RGB.
Note: this is ok on the desktop but it's noticeably slow on a smartphone, I found.
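A minimal sketch of that colorize step for a single pixel (RGB components assumed to be in [0, 1], hue in [0, 1)); the original hue is discarded while saturation and lightness are kept:

    using System;

    static (double R, double G, double B) SetHue(double r, double g, double b, double newHue)
    {
        // RGB -> HSL: only saturation and lightness are kept.
        double max = Math.Max(r, Math.Max(g, b));
        double min = Math.Min(r, Math.Min(g, b));
        double l = (max + min) / 2.0;
        double d = max - min;
        double s = d == 0 ? 0 : d / (1 - Math.Abs(2 * l - 1));

        // HSL -> RGB using the replacement hue (e.g. 0.477 for ~172°).
        double c = (1 - Math.Abs(2 * l - 1)) * s;      // chroma
        double h6 = newHue * 6.0;                      // hue sector in [0, 6)
        double x = c * (1 - Math.Abs(h6 % 2 - 1));
        double m = l - c / 2.0;

        var (r1, g1, b1) = h6 switch
        {
            < 1 => (c, x, 0.0),
            < 2 => (x, c, 0.0),
            < 3 => (0.0, c, x),
            < 4 => (0.0, x, c),
            < 5 => (x, 0.0, c),
            _   => (c, 0.0, x),
        };
        return (r1 + m, g1 + m, b1 + m);
    }

Note that this is the colorize behaviour described above, not an exact reproduction of Photoshop's Hue blend mode, which keeps the base layer's saturation and luminosity using its own colour model.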
You can see the difference by comparing three images: the original, the colorize result, and Fireworks' "blend hue" result (which I think is the same as Photoshop's).
The colorize filter might be a good substitute.
RGB/HSL conversion question
Hue/Chroma and HSL on Wikipedia
I found an algorithm to convert RGB to HSV here:
http://www.cs.rit.edu/~ncs/color/t_convert.html
Of course, at the bottom of that page it mentions that the Java Color object already has methods for converting between RGB and HSV, so I just used that.

About animating frame by frame with sprite files

I used to animate my CCSprites by iterating through 30 image files (rather big ones) and, for each file, changing the CCSprite's texture to that image.
Someone told me that was not efficient and I should use spritesheets instead. But, can I ask why is this not efficient exactly?
There are two parts to this question:
Memory.
OpenGL ES requires textures to have widths and heights that are powers of 2, e.g. 64x128, 256x1024, 512x512, etc. If your images don't comply, Cocos2D will automatically resize each image to fit those dimensions by adding extra transparent space. With successive images being loaded in, you are constantly wasting more and more space. By using a sprite sheet, you already have all the images tightly packed to reduce wastage (see the rough numbers below).
Speed. Related to the above, it takes time to load an image and resize it. By only performing the load once, you speed the entire process up.
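To put rough numbers on the padding cost, here is a small sketch (the 1280x800 frame size and RGBA8888 format are assumptions, just to illustrate the arithmetic):

    using System;

    static int NextPowerOfTwo(int v)
    {
        int p = 1;
        while (p < v) p <<= 1;
        return p;
    }

    // A single 1280x800 frame at 4 bytes per pixel (RGBA8888)...
    long used = 1280L * 800 * 4;                                        // ~4.1 MB
    // ...gets padded to 2048x1024 to satisfy the power-of-two rule.
    long padded = (long)NextPowerOfTwo(1280) * NextPowerOfTwo(800) * 4; // ~8.4 MB

    Console.WriteLine($"wasted: {100.0 * (padded - used) / padded:F1}%"); // ~51% wasted

With 30 such frames loaded one after another, that waste is paid repeatedly, whereas a sprite sheet packs the frames into one power-of-two texture up front.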

resolution from a PDFPage?

I have a PDF document that is created by creating NSImages with sizes in 72 dpi points; each has a single representation, which is measured in pixels. I then put these images into PDFPages with initWithImage and save the document.
When I open the document, I need the resolution of the original image. However, all of the rectangles that PDFPage gives me are measured in points, not pixels.
I know that the information is in there, and I suppose I can try to parse the PDF data myself, by going through the voyeur.app example... but that's a WHOLE lot of effort to do something that should be pretty normal...
Is there an easier way to do this?
Added:
I've tried two techniques:
Get the PDFRepresentation data from the page, and use it to make a new NSImage via initWithData. This works; however, the image has both size and pixel size in 72 dpi.
Draw the PDFPage into a new off-screen context, and then get a CGImage from that. The problem is that when I'm making the context, it appears that I need to know the size in pixels already, which defeats part of the purpose...
There are a few things you need to understand about PDF:
The PDF Coordinate system is in points (1/72 inch) by default.
The PDF Coordinate system is devoid of resolution. (this is a white lie - the resolution is effectively the limits of 32 bit floating point numbers).
Images in PDF do not inherently have any resolution attached to them (this is a white lie - images compressed with JPEG2000 still have resolution in their embedded metadata).
An Image in PDF is represented by an object that contains a series of samples that are stored using some compression filter.
Image objects can be rendered on a page multiple times at any size.
Since resolution is defined as the number of pixels (or samples) per unit distance, resolution only means something for a particular rendering of an image on a page. So if you are rendering a particular image to fill the page, then the resolution in dpi is
xdpi = image_width / (pageWidthInPoints / 72.0);
ydpi = image_height / (pageHeightInPoints / 72.0);
If the image is not being rendered to the full size of the page, a complete solution is very tricky. Adobe prescribes that images should be treated as being 1x1 and that you change the page transformation matrix to determine how to render them. This means that you would need the matrix in effect at the point of rendering the image, and you would need to push the points (0,0), (0,1), (1,0) through the matrix. The Euclidean distance between (0,0)' and (1,0)' will give you the width in points, and the Euclidean distance between (0,0)' and (0,1)' will give you the height in points.
So how do you get that matrix? Well, you need the content stream for the page and you need to write a PDF interpreter that can rip the content stream and keep track of changes to the CTM. When you reach your image, you extract the CTM for it.
Doing that last step should take about an hour with a decent PDF toolkit, provided you are familiar with the toolkit. Writing that toolkit is several person-years of work.
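Once you do have the CTM in effect when the image is drawn, the dpi calculation is mechanical. A sketch (hypothetical helper; a, b, c, d are the first four entries of the standard PDF [a b c d e f] matrix, and the translation terms drop out of the distance calculation):

    using System;

    static (double XDpi, double YDpi) EffectiveDpi(
        double a, double b, double c, double d,
        int imageWidthPx, int imageHeightPx)
    {
        // (1,0) and (0,1) pushed through the matrix, relative to (0,0):
        double widthPts  = Math.Sqrt(a * a + b * b);   // drawn width in points
        double heightPts = Math.Sqrt(c * c + d * d);   // drawn height in points

        // samples per inch = samples / (points / 72)
        return (imageWidthPx / (widthPts / 72.0),
                imageHeightPx / (heightPts / 72.0));
    }

For the full-page case this reduces to the two formulas above, since the CTM then scales the unit square straight to the page size in points.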