Pixel shader performance in DirectX 9 not equivalent to OpenGL ES 2.0

I am using a drop effect, implemented with a pixel shader in Direct3D 9 (specifically SlimDX.Direct3D9) and written in HLSL, to transition between two images. I have written the same pixel shader in GLSL for use in an Android project (Java 6.0).
The issue is the performance difference between the two machines. The Android device shows a smooth transition, but on the Windows machine there is visible pixelation during the transition. Pixel shader model 2.0 is used in the DirectX project.

I think a couple of pictures would help immensely.
It could be a difference in sampling coordinates. Make sure you are getting 1:1 texture/pixel mapping.
Another possibility could be that the filtering is set to point instead of linear.
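If it does turn out to be point filtering, a minimal sketch of forcing linear filtering looks like the following. This is my own illustration using the native Direct3D 9 API (SlimDX exposes equivalent sampler states through Device.SetSamplerState), not code from the question:

// Assumes an existing IDirect3DDevice9* device and that the transition
// textures are bound to sampler stage 0.
device->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_LINEAR);
device->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR);
device->SetSamplerState(0, D3DSAMP_ADDRESSU,  D3DTADDRESS_CLAMP);
device->SetSamplerState(0, D3DSAMP_ADDRESSV,  D3DTADDRESS_CLAMP);

// For 1:1 texel-to-pixel mapping, remember the classic D3D9 half-texel
// offset: shift the screen-space quad by -0.5 pixels in x and y, or offset
// the UVs by 0.5 / textureWidth and 0.5 / textureHeight.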

Related

How to get a soft particle effect using the Direct3D 11 API

I have tried every way I can think of to calculate the particle alpha and to bind the shader resource for the draw call, but in RenderDoc the screenDepthTexture always shows "No Resource".
You’re probably trying to use the same depth buffer texture in two stages of the pipeline at the same time: it is read in the pixel shader to compute softness, while the same texture is bound as the depth render target in the output merger stage.
This is not going to work. When you call OMSetRenderTargets, the pixel shader binding for the resource view of that texture is unset.
An easy workaround is to make a copy of your depth texture with CopyResource and bind the copy to the input of the pixel shader. That way your pixel shader can read from the copy, while the output merger stage keeps using the original as the depth/stencil target.
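A minimal sketch of that workaround, under the assumption that the depth buffer is non-multisampled and was created as DXGI_FORMAT_R32_TYPELESS with a D32_FLOAT depth-stencil view (variable and function names here are illustrative, not from the question):

#include <d3d11.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void BindReadableDepthCopy(ID3D11Device* device, ID3D11DeviceContext* context,
                           ID3D11Texture2D* depthTexture)
{
    // Create a second texture matching the depth buffer, readable by shaders.
    D3D11_TEXTURE2D_DESC desc = {};
    depthTexture->GetDesc(&desc);
    desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;

    ComPtr<ID3D11Texture2D> depthCopy;
    device->CreateTexture2D(&desc, nullptr, &depthCopy);

    // Duplicate the live depth buffer into the readable copy.
    context->CopyResource(depthCopy.Get(), depthTexture);

    // View the copy as a plain float texture for the soft-particle shader.
    D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
    srvDesc.Format = DXGI_FORMAT_R32_FLOAT;   // matches the R32_TYPELESS assumption
    srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
    srvDesc.Texture2D.MipLevels = 1;

    ComPtr<ID3D11ShaderResourceView> depthSRV;
    device->CreateShaderResourceView(depthCopy.Get(), &srvDesc, &depthSRV);
    context->PSSetShaderResources(0, 1, depthSRV.GetAddressOf());
}

In a real renderer you would create the copy texture and its SRV once and only repeat the CopyResource call each frame, rather than recreating them.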
In the future, to troubleshoot such issues, create the device with the D3D11_CREATE_DEVICE_DEBUG flag and read the debug output in Visual Studio. RenderDoc is awesome for higher-level bugs, when you're rendering something but it doesn't look right; your mistake here is incorrect use of the D3D API, and for that kind of thing the debug layer in the Windows SDK is more useful than RenderDoc.
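For reference, enabling the debug layer only requires passing the flag in your device-creation code, along these lines:

UINT flags = 0;
#if defined(_DEBUG)
flags |= D3D11_CREATE_DEVICE_DEBUG;   // validation messages show up in the VS Output window
#endif

D3D_FEATURE_LEVEL obtainedLevel = {};
ID3D11Device* device = nullptr;
ID3D11DeviceContext* context = nullptr;
HRESULT hr = D3D11CreateDevice(
    nullptr,                    // default adapter
    D3D_DRIVER_TYPE_HARDWARE,
    nullptr,                    // no software rasterizer module
    flags,
    nullptr, 0,                 // default feature-level list
    D3D11_SDK_VERSION,
    &device, &obtainedLevel, &context);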

How do I convert OpenGL ES shaders to Metal-compatible ones?

I have a project which uses about two dozen .vsh and .fsh files to draw 2D tiles using OpenGL ES. Since that is deprecated, I want to convert my project to Metal. My head is now swimming with the vocabulary and techniques involved in both systems; graphics is not my forte.
Can I use OpenGL ES to compile the .vsh/.fsh files and then save them in a Metal-compatible format? The goal would be to use the saved information in a Metal-centric world and remove all the OpenGL ES code from the project. I've spent a few days on this already, and I still don't understand the processes well enough to fully attempt the transition to Metal. Any/all help is appreciated.
I saw this in "OpenGL ES deprecated in iOS 12 and SKShader": "On devices that support it, the GLSL code you provide to SKShader is automatically converted to Metal shading language and run on a Metal renderer". That leads me to believe there is a way to get this done; I just don't know where to begin.
I have also seen "Convert OpenGL shader to Metal (Swift) to be used in CIFilter", and if it answers my question, I don't understand how.
I don't think this answers it either: OpenGL ES and OpenGL compatible shaders
Answers/techniques can use either Objective-C or Swift - the existing code is Objective-C, the rest of the project has been converted to Swift 5.
There are many ways to do what you want:
1) You can use MoltenGL to seamlessly convert your GLSL shaders to MSL.
2) You can use open-source shader cross-compilers such as krafix, pmfx-shader, etc.
I would like to point out that, based on my experience, you will usually get better performance if you rewrite the shaders yourself.
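To give a sense of what a hand rewrite involves, here is a rough sketch (my own illustration, not code from the question) of what a minimal textured-tile fragment shader tends to look like in the Metal Shading Language; the struct, function, and binding names are placeholders:

#include <metal_stdlib>
using namespace metal;

struct TileVertexOut {
    float4 position [[position]];   // clip-space position from the vertex shader
    float2 texCoord;                // interpolated texture coordinate
};

// Roughly the Metal counterpart of a trivial .fsh that samples one texture.
fragment float4 tileFragment(TileVertexOut in          [[stage_in]],
                             texture2d<float> tileTex  [[texture(0)]],
                             sampler tileSampler       [[sampler(0)]])
{
    return tileTex.sample(tileSampler, in.texCoord);
}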

PVR texture format: MGLPT_PVRTC4 vs OGL_PVRTC4

These are PVR texture formats from Imagination Technologies, defined in PVRTexLibGlobals.h.
What is the difference between the two texture formats MGLPT_PVRTC4 and OGL_PVRTC4?
I have used OGL_PVRTC4 before. Is MGLPT_PVRTC4 exactly the same?
My code broke when a tool produced an MGLPT_PVRTC4 texture, so I am wondering how MGLPT_PVRTC4 textures should be processed.
The best place to ask would be at the PowerVR Insider forum (http://forum.imgtec.com/categories/powervr-graphics). FWIW, MGL is an old API that pre-dates OpenGL ES 1.0 (and the latter recently turned 10).
It is extremely likely that the ordering of the contents of the file differs between the two.

OpenGL power of two texture performance [duplicate]

I am creating an OpenGL video player using FFmpeg, and none of my videos are powers of two (since they are normal video resolutions). It runs at a fine frame rate on my NVIDIA card, but I've found that it won't run on older ATI cards because they don't support non-power-of-two textures.
I will only be using this on an NVIDIA card, so I don't care too much about the ATI problem, but I was wondering how much of a performance boost I'd get if the textures were powers of two. Is it worth padding them out?
Also, if it is worth it, how do I go about padding them out to the nearest larger power of two?
Writing a video player, you should update your texture content using glTexSubImage2D(). This function allows you to supply arbitrarily sized images, which are placed at the specified position within the target texture. So you can first initialize the texture with a call to glTexImage2D() with a NULL data pointer, then fill in the data.
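A minimal sketch of that pattern (my own illustration; it assumes the decoded frame is tightly packed RGB data and that a GL context is current):

#include <GL/gl.h>   // platform-specific GL headers may differ

// Round a dimension up to the next power of two (e.g. 1280 -> 2048).
static int nextPowerOfTwo(int v) {
    int p = 1;
    while (p < v) p <<= 1;
    return p;
}

// One-time setup: allocate a power-of-two texture with no initial data.
GLuint createVideoTexture(int videoWidth, int videoHeight) {
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,
                 nextPowerOfTwo(videoWidth), nextPowerOfTwo(videoHeight), 0,
                 GL_RGB, GL_UNSIGNED_BYTE, NULL);   // NULL data: allocate storage only
    return tex;
}

// Per frame: upload only the video-sized region into the larger texture.
void uploadFrame(GLuint tex, int videoWidth, int videoHeight,
                 const unsigned char* frameData) {
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, videoWidth, videoHeight,
                    GL_RGB, GL_UNSIGNED_BYTE, frameData);
}

When drawing, scale your texture coordinates by videoWidth / textureWidth and videoHeight / textureHeight so the unused padding is never sampled.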
The performance gain of pure power of 2 textures strongly depends on the hardware used, but in extreme cases it may be up to 300%.

iOS Image Manipulation (Distortion)

I initially approached this issue with CoreImage in mind (because I also need to do facial recognition), but realized that, unfortunately, the CI Distortion filters are not yet included on the iPhone.
I attempted to dive into GLImageProcessing, CImg, and ImageMagick, though I've had a lot of trouble finding a starting point for learning any of these.
Given the number of apps out there that do image distortion, I know this can't be incredibly difficult.
I don't know C or C++, and don't have the time to learn those languages unless absolutely necessary. It would become necessary if one of those libraries is the definitive library for handling this task.
Does anyone have experience with any of these libraries?
Any books out there that cover this for iOS5 specifically?
Resources I've found:
GLImageProcessing sample project
https://developer.apple.com/library/ios/#samplecode/GLImageProcessing/Introduction/Intro.html
ImageMagick & MagickWand
http://www.imagemagick.org/script/magick-wand.php
CImg
http://cimg.sourceforge.net/
Simple iPhone image processing
http://code.google.com/p/simple-iphone-image-processing/
As you say, the current capabilities of Core Image are a little limited on iOS. In particular, the lack of custom kernels like you find on the desktop is disappointing. The other alternatives you list (with the exception of GLImageProcessing, which wouldn't be able to do this kind of filtering) are all CPU-bound libraries and would be much too slow for doing live filtering on a mobile device.
However, I can point you to an open source framework called GPUImage that I just rolled out because I couldn't find something that let you pull off custom effects. As its name indicates, GPUImage does GPU-accelerated processing of still images and video using OpenGL ES 2.0 shaders. You can write your own custom effects using these, so you should be able to do just about anything you can think of. The framework itself is Objective-C, and has a fairly simple interface.
As an example of a distortion filter, the following shader (based on the code in Danny Pflughoeft's answer) does a sort of fisheye effect:
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
const mediump float bulgeFactor = 0.5;

void main()
{
    mediump vec2 processedTextureCoordinate = textureCoordinate - vec2(0.5);
    mediump float radius = processedTextureCoordinate.x * processedTextureCoordinate.x + processedTextureCoordinate.y * processedTextureCoordinate.y;
    mediump vec2 distortedCoordinate = vec2(pow(radius, bulgeFactor)) * processedTextureCoordinate + vec2(0.5);
    gl_FragColor = texture2D(inputImageTexture, distortedCoordinate);
}
This produces a fisheye-style bulge effect when applied to a live video stream.
In my benchmarks, GPUImage processes images 4X faster than Core Image on an iPhone 4 (6X faster than CPU-bound processing) and video 25X faster than Core Image (70X faster than on the CPU). In even the worst case I could throw at it, it matches Core Image for processing speed.
The framework is still fairly new, so the number of stock filters I have in there right now is low, but I'll be adding a bunch more soon. In the meantime, you can write your own custom distortion shaders to process your images, and the source code for everything is available for you to tweak as needed. My introductory post about it has a little more detail on how to use this in your applications.