Knob effect in QML

I'm trying to implement a custom widget in QML to obtain an effect like the "Angle offset" element at http://anthonyterrien.com/knob/.
I tried a Canvas object, but if I apply a scale to the Canvas's parent object, the rendered quality becomes very poor.
I also tried a GLSL fragment shader, but the gl_FragCoord variable ties me to screen coordinates, and in any case the rendering degrades the same way when the parent object is scaled.
Can someone help me, please?

Related

Vulkan Rendering - Portion of Surface

How do I render a Vulkan framebuffer (VkImage) into a portion of the surface?
When I draw into the framebuffer, Vulkan clears the whole surface with the clear color.
The surface is 800x600, but I would like Vulkan to render a 300x200 region at an offset of (100,100), for example.
When you begin a render pass, you provide the VkRenderPassBeginInfo object. In this object is the renderArea rectangle, which defines the area of each of the attachment images that the render pass will affect. Any pixels of attachments outside of this area are unaffected by render pass operations, including the clear load op and vkCmdClearAttachments.
Note that the renderArea is subject to the limitations of the render area granularity, as queried from vkGetRenderAreaGranularity.
You can subset a window by setting the scissor rectangle and viewport in the VkGraphicsPipelineCreateInfo structure to the subregion you wish to render. You can also configure the viewport dynamically at draw time using vkCmdSetViewport().
For vkCmdClearAttachments() you can set the clear area via the pRects argument (it is not affected by the viewport).
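Putting both answers together, here is a minimal C sketch of the approach (all handle names are placeholders, and it assumes the pipeline was created with dynamic viewport/scissor state):

#include <vulkan/vulkan.h>

/* Restrict a render pass (including its clear load op) to a 300x200
 * region at offset (100,100) of an 800x600 surface. */
static void begin_subregion_pass(VkCommandBuffer cmd, VkRenderPass pass,
                                 VkFramebuffer fb, const VkClearValue *clear)
{
    VkRenderPassBeginInfo info = {
        .sType           = VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO,
        .renderPass      = pass,
        .framebuffer     = fb,
        /* Pixels outside renderArea are untouched by the clear. */
        .renderArea      = { .offset = { 100, 100 }, .extent = { 300, 200 } },
        .clearValueCount = 1,
        .pClearValues    = clear,
    };
    vkCmdBeginRenderPass(cmd, &info, VK_SUBPASS_CONTENTS_INLINE);

    /* Point the viewport and scissor at the same region so draw calls
     * land inside it (requires dynamic viewport/scissor state). */
    VkViewport viewport = { 100.0f, 100.0f, 300.0f, 200.0f, 0.0f, 1.0f };
    VkRect2D   scissor  = { { 100, 100 }, { 300, 200 } };
    vkCmdSetViewport(cmd, 0, 1, &viewport);
    vkCmdSetScissor(cmd, 0, 1, &scissor);
}

Remember that renderArea is still subject to the granularity reported by vkGetRenderAreaGranularity, as noted above.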

Blender border render internals

I would like to know how Blender's border render works internally. How can Blender compute lighting if it has no information about the lights in the tiles it won't render? I have not found any reference (source code excluded) on how this feature of Blender works. Can somebody explain it (or point me to a reference)?
The render border setting only alters what part of the image is rendered, it does not alter what data is sent to the render engine to generate the image.
You can test this by placing an object with a reflective surface in front of the camera and another object behind the camera: the object behind the camera will show in the reflection. The border setting doesn't change the reflection in the object; it only changes what part of the image is rendered.
Rendering an image starts at the pixel that will be visible in the final image and sends a "ray" into the scene to determine what colour the specific pixel will be. Each ray will bounce around in the scene from object to object to light source based on render settings to calculate the final result. While the render border will reduce the pixels used as the starting point for each ray, it does not reduce the objects or lights in the scene that each ray may come into contact with. Each ray going through the scene will see every visible object and light in the scene that can influence the final result for each pixel.
This conference video explains ray types and might give you a better grasp of how a ray goes through a scene to get the final image.
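As a rough illustration of that last point, here is some conceptual C (not Blender's actual internals; every type and function here is a stand-in): the border only clamps the loop over starting pixels, while every ray still traverses the complete scene.

typedef struct Scene Scene;                /* full scene: every object and light */
typedef struct { float o[3], d[3]; } Ray;  /* origin and direction */
typedef struct { float r, g, b; } Color;

Ray   camera_ray(const Scene *s, int x, int y);  /* primary ray through pixel (x,y) */
Color trace(const Scene *s, Ray ray);            /* bounces through the whole scene */

void render_border(Color *image, int image_width, const Scene *scene,
                   int xmin, int xmax, int ymin, int ymax)
{
    /* The border limits only which pixels spawn primary rays... */
    for (int y = ymin; y < ymax; ++y)
        for (int x = xmin; x < xmax; ++x)
            /* ...but trace() still sees every object and light, so
             * reflections and lighting match a full-frame render. */
            image[y * image_width + x] = trace(scene, camera_ray(scene, x, y));
}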

Graphics rendering operation on esri

Does anyone know why graphics objects such as polygons, points, and picture markers are re-rendered from scratch when zooming or panning an Esri map?
For example, in the following link the brackets disappear and are redrawn from scratch on each zoom change or map move: example.
Thanks in advance,
Gal
Yes, the graphics redraw from scratch when panning and zooming because a graphics layer has been added to the map control.
// A new GraphicsLayer is created and added to the map; graphics layers
// are redrawn client-side from scratch on every pan or zoom.
GraLay.SelectedFeature = new ags.GraphicsLayer();
map.addLayer(GraLay.SelectedFeature);

OpenGL ES 2.0: attach smaller texture to framebuffer

I have implemented a bloom post-process effect in my game for Android using render-to-texture and the appropriate shaders. It works, but the performance hit is unacceptable. So I thought I could render the scene to a smaller texture and then stretch the texture to fullscreen. The trouble is that when I attach a texture that is smaller than the viewport to the offscreen framebuffer, the scene is cropped. The image below illustrates the issue:
Is there any way I could "map" the attached texture to the framebuffer somehow, so the whole viewport gets rendered to it? I could probably modify the projection matrix to achieve the goal, but that would complicate my code and I would rather avoid it.
I think you can do that by simply changing the viewport to match the texture dimensions before you do the render to texture, then setting the viewport back to the dimensions of the view before you render to the default framebuffer. There should be no significant performance loss even though you will be calling glViewport() twice as often.
Your suggestion about scaling the projection matrix should also work.
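A minimal C sketch of the two-viewport approach (the function and variable names are placeholders for the game's own code):

#include <GLES2/gl2.h>

void draw_scene(void);                 /* the game's normal scene pass */
void draw_fullscreen_quad(GLuint tex); /* samples tex across the whole screen */

void render_with_small_fbo(GLuint fbo, GLuint fbo_texture,
                           int tex_w, int tex_h, int screen_w, int screen_h)
{
    /* Pass 1: render into the smaller texture. The viewport must match
     * the texture, not the screen, or the scene is cropped exactly as
     * described in the question. */
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glViewport(0, 0, tex_w, tex_h);
    draw_scene();

    /* Pass 2: back to the default framebuffer at full resolution; the
     * stretch happens for free when the small texture is sampled. */
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glViewport(0, 0, screen_w, screen_h);
    draw_fullscreen_quad(fbo_texture);
}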

How to add a shadow to a UIImageView that fits the shape of the image content, but with a rotation and shift effect

I have been looking for a solution on the web for a long time. Most tutorials are fairly basic, covering only how to add a shadow to a UIView. I also noticed that if we add a shadow to a UIImageView, the shadow's shape can perfectly fit the shape of the content image when the image has an alpha channel. For example, if the image is an animal with a transparent background, the shadow is also animal-shaped (not the rectangular shadow of the UIImageView's frame).
But that is not enough. What I need is to transform the shadow so that it has a rotation angle and a compressed (squeezed or shifted) effect, so it looks like the sunlight is coming from a particular spot.
To demonstrate what I need, I uploaded two images below, captured from the Google Maps app created by Apple. You can imagine the annotation pin is an image with a pin shape, so the shadow is also "pin shaped"; but it is not simply offset by a CGSize: you can see the top of the shadow is shifted right by about 35 degrees and the height is slightly squeezed.
When we tap and hold a pin, the shadow also animates away from the pin, so I believe such a shadow can be created programmatically.
The best shadow tutorial I have found so far is http://nachbaur.com/blog/fun-shadow-effects-using-custom-calayer-shadowpaths but unfortunately it cannot produce this effect.
If anyone knows the answer, or knows better terms to search for, please let me know. Thank you.
(Please note that the shape of the image is dynamic in the app, so pre-rendering the shadow with a tool like Photoshop is not an option.)
In order to create dynamic effects like this, you have to use Core Graphics. It's incredibly powerful once you know how to use it. Basically you need to set a skew transform on the context, set up a shadow and draw the image. You will probably have to use transparency layers as well.
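A minimal sketch of that idea in C, using the Core Graphics API directly (the skew values, the 40% black, and the function name are all illustrative, and blurring the silhouette is left out for brevity):

#include <CoreGraphics/CoreGraphics.h>

/* Draw `image` into `ctx` with a skewed, squashed shadow underneath. */
static void DrawImageWithSkewedShadow(CGContextRef ctx, CGImageRef image, CGRect rect)
{
    /* Pass 1: the shadow. Inside a transparency layer, lay down the
     * image's alpha under a skew, then recolour it with source-in so
     * only a flat silhouette remains. */
    CGContextSaveGState(ctx);
    CGContextBeginTransparencyLayer(ctx, NULL);
    /* x' = x + 0.6*y leans the shape sideways; y' = 0.5*y squashes it. */
    CGContextConcatCTM(ctx, CGAffineTransformMake(1.0f, 0.0f, 0.6f, 0.5f,
                                                  0.0f, 0.0f));
    CGContextDrawImage(ctx, rect, image);
    CGContextSetBlendMode(ctx, kCGBlendModeSourceIn);
    CGContextSetGrayFillColor(ctx, 0.0f, 0.4f);   /* 40% black shadow */
    CGContextFillRect(ctx, rect);
    CGContextEndTransparencyLayer(ctx);
    CGContextRestoreGState(ctx);

    /* Pass 2: the image itself, undistorted, on top of the shadow. */
    CGContextDrawImage(ctx, rect, image);
}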
It doesn't sound like you can use CALayer shadows, since that is meant to solve a specific use-case. The approach Apple takes with the pin marks on the map is to have two separate images that are created ahead of time (e.g. in Photoshop) and they position them within the map relative to a reference point.
If you really do need to do this at run-time, it should still be possible using either Core Graphics or Core Image. To get a blurred shadow appearance, you can use a CIFilter from the kCICategoryBlur category (such as CIGaussianBlur). You can then convert the image to grayscale, and to get that compressed look you just need to resize and skew the image.
Once you have the two separate images, you can either take the CGImageRef for the shadow image and set it as the contents of another sublayer, or add it as a separate view.
If you know what all the shapes are, you could just render a shadow image in Photoshop or something.