PVR texture format: MGLPT_PVRTC4 vs OGL_PVRTC4 - opengl-es-2.0

The PVR texture format from Imagination Technologies, defined in PVRTexLibGlobals.h.
What is the difference between these texture formats:
MGLPT_PVRTC4 vs OGL_PVRTC4?
I have used OGL_PVRTC4 before. Is MGLPT_PVRTC4 exactly the same?
My code broke when a tool produced an MGLPT_PVRTC4 texture, and I am wondering how MGLPT_PVRTC4 textures should be processed.

The best place to ask would be at the PowerVR Insider forum (http://forum.imgtec.com/categories/powervr-graphics). FWIW, MGL is an old API that pre-dates OpenGL ES 1.0 (and the latter recently turned 10).
It is very likely that the ordering of the file contents differs between the two.
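If it helps, here is a minimal sketch of the kind of check a loader could do when it meets either enum. The only things taken from the question are the two constants from PVRTexLibGlobals.h; the function name and the pixelFormat argument are placeholders, and (as noted above) the rest of the file layout may well differ between the two variants.

#import "PVRTexLibGlobals.h"

// Sketch only: recognise both legacy 4bpp PVRTC pixel-format enums.
// `pixelFormat` is assumed to have been read from the texture header by the caller.
static BOOL PixelFormatIsPVRTC4(NSUInteger pixelFormat)
{
    switch (pixelFormat) {
        case OGL_PVRTC4:    // the value the existing code already handles
        case MGLPT_PVRTC4:  // presumably the older MGL-era name for the same 4bpp PVRTC data
            return YES;
        default:
            return NO;
    }
}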

Related

How do I convert OpenGLES shaders to Metal compatible ones?

I have a project which uses about 2 dozen .vsh and .fsh files to draw 2D tiles using OpenGLES. Since that is deprecated, I want to convert my project to Metal. My head is now swimming with vocabulary and techniques involved in both systems - graphics is not my forte.
Can I use OpenGLES to compile the .vsh/.fsh files, and then save them in a metal-compatible format? The goal would be to then use the saved information in a metal-centric world and remove all the OpenGLES code from the project. I've spent a few days on this already, and yet I don't understand the processes enough to fully attempt the transition to Metal. Any/all help is appreciated.
I saw this: "On devices that support it, the GLSL code you provide to SKShader is automatically converted to Metal shading language and run on a Metal renderer", which leads me to believe there is a way to get this done. I just don't know where to begin. (See: OpenGL ES deprecated in iOS 12 and SKShader.)
I have seen this: "Convert OpenGL shader to Metal (Swift) to be used in CIFilter", and if it answers my question, I don't understand how.
I don't think "OpenGL ES and OpenGL compatible shaders" answers it either.
Answers/techniques can use either Objective-C or Swift - the existing code is Objective-C, the rest of the project has been converted to Swift 5.
There are many ways to do what you want:
1) You can use MoltenGL to seamlessly convert your GLSL shaders to MSL.
2) You can use open-source shader cross-compilers such as krafix, pmfx-shader, etc.
I would like to point out that, in my experience, you will get better performance if you rewrite the shaders yourself.
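Whichever route you take, what you end up with is Metal Shading Language that Metal compiles either at build time (.metal files in the target) or at runtime from source. As a rough Objective-C sketch of the runtime path (assuming the converted source sits in a string called mslSource and contains a fragment function named myFragment, both of which are placeholders):

#import <Metal/Metal.h>

// Sketch: compile cross-compiled MSL at runtime and pull out one function.
NSString *mslSource = @"/* converted MSL source from one of the tools above */";
id<MTLDevice> device = MTLCreateSystemDefaultDevice();
NSError *error = nil;
id<MTLLibrary> library = [device newLibraryWithSource:mslSource options:nil error:&error];
if (!library) {
    NSLog(@"MSL compile failed: %@", error);
}
id<MTLFunction> fragmentFunction = [library newFunctionWithName:@"myFragment"];
// fragmentFunction then goes into an MTLRenderPipelineDescriptor as usual.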

Why does coregraphics support PDF and why is the format so pivotal to Apple?

There are many formats that intuitively seem more closely tied to graphics: vector formats such as SVG, or the various PNG and JPEG raster formats. Why does PDF, of all of them, have a dedicated library?
Note: this question is NOT opinion-based. I'm interested in the actual reason why PDF occupies such a central place in Apple's operating systems. Give it a day to see.
My bonafides: I licensed my PDF viewer to Apple for Rhapsody and then the PDF library I wrote in Objective-C to Apple as the basis of CoreGraphics. I’ve written code for NeXT systems since 1988, and worked as a contractor for NeXT and Apple many times.
As Rob says, NeXTstep used Display PostScript (“DPS”) under license from Adobe. When Apple switched from “Classic” Mac OS to OS X, they absolutely didn’t want to keep paying Adobe the per-seat DPS license on each Mac they sold; it had been years since they’d agreed to pay a per-seat license to anyone for the Mac.
And, at the same time, DPS didn’t reflect modern graphics trends. It was a Turing-complete language and ran in its own process, and client apps would have to send inter-process messages to it, which meant expensive context switches and no direct sharing of buffers. (Also it broke down isolation between client processes.)
So the graphics team (led then by Peter Graffagnino) needed to replace DPS, but also not break all the existing NeXTstep apps. As it happened, an engineer at Adobe (I believe it was originally Sam Streeper) had written a graphics language that was MUCH simpler than PostScript and wasn’t Turing complete, but still had most of the drawing conventions of PostScript. (Display PostScript was a superset of PostScript, so any PostScript program runs in DPS.)
PDF was originally designed for PostScript-driven printers, because lots of PostScript programs (aka “print jobs”) ended up slurping down memory and processor time, and often were just too big to work on any individual printer (RAM was expensive back then). Converting PostScript files to PDF made them far easier to process, and since PDF doesn’t have control flow you couldn’t, say, send the printer into an infinite loop. (Adobe processed PDF with a PostScript program they’d send to the printer as well.)
PDF was also an open standard, so with Apple writing their own implementation they didn’t have to pay Adobe anything to use it. PDF was fast and simple and free and had the exact same model because it was designed to run on PostScript printers.
So: Apple had to switch to a new graphics language for Mac OS X (née NeXTstep), and wanted one with the same semantics as Display PostScript so all their existing apps largely still worked.
So first they wrote the functional calls to CoreGraphics, implementing all the conventions of PostScript / PDF (eg: how clip paths work, how paths are filled if they self-overlap, resolution-independence, the state stack, etc). They rewrote all the existing high-level graphics functions (eg, NSRectFill()) so they were on top of CoreGraphics, so 99% of existing NeXTstep programs could switch from DPS to CG without modification. (By the late 1990s most NeXTstep programmers had realized dipping into DPS and trying to create effects by writing PostScript was pretty much always a bad idea, so we used the high-level calls.)
But the final missing piece was giving programs a format they could read and write to disk. In NeXTstep days we used both PostScript and Encapsulated PostScript, but the former was Right Out now. PDF was a natural fit (for the reasons outlined above) so they added reading and writing of PDF files.
Postscript (pun intended): The PDF code in CoreGraphics today is not what I licensed to Apple; they used my code more as a “working reference” and came up with their own APIs, since CoreGraphics couldn’t use Objective-C and my API was never really intended to be used by general programs.
The drawing model utilized by Quartz 2D is based on the PDF specification.
It is widely stated that Quartz "uses PDF internally" (notably by Apple in their 2000 Macworld presentation and Quartz's early developer documentation), ... Quartz's internal imaging model correlates well with the PDF object graph, making it easy to output PDF to multiple devices.
https://en.wikipedia.org/wiki/Quartz_(graphics_layer)#Use_of_PDF
https://developer.apple.com/library/content/documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/dq_overview/dq_overview.html
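To make the correlation concrete: the same Quartz drawing calls an app issues on screen can be pointed at a PDF context and captured as vector PDF. A minimal iOS-flavoured sketch (the file name and the drawing are just placeholders):

#import <UIKit/UIKit.h>

// Sketch: record ordinary Core Graphics drawing into a PDF file.
NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"demo.pdf"];
UIGraphicsBeginPDFContextToFile(path, CGRectMake(0, 0, 612, 792), nil);
UIGraphicsBeginPDFPage();

CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(ctx, [UIColor redColor].CGColor);
CGContextFillEllipseInRect(ctx, CGRectMake(100, 100, 200, 200));

UIGraphicsEndPDFContext();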
Continuing Mahal Tertin's points, focusing on the drawing side, which isn't really PDF-centric but the history is interesting anyway (I'll get to the file format side below).
Quartz was developed before 1999, and was a fairly natural continuation of NeXT's use of Display Postscript and QuickDraw, which conceptually goes back to the Lisa and was used extensively on pre-OS X Macs.
SVG might have been a reasonable base, but work on it didn't even start until 1999. It was never in a position to be the basis for OS X drawing. Even if it had come out a couple of years earlier, it would still have been an untested, experimental format versus an established format that Apple already had extensive code and experience for.
When iOS came along, it inherited Core Graphics, which made development dramatically easier for OS X developers. Converting everything to an SVG DOM or the like would have made iOS much harder to migrate to. (I had complex custom views on OS X that I ported verbatim to iOS with the addition of one flip translation and a hacky #define to replace NSColor with UIColor.) It also made developing iOS much easier in the first place since Apple could reuse substantial existing code. Why replace the entire drawing system with another format? To what purpose?
Quartz/PDF/Postscript all have the concept of independent objects/layers that are composited together, and have lots of support for text and lines. This makes a lot of sense when you're building a windowing system and want to easily drag a window across the screen, overlapping other windows, while minimizing computation. PNG and JPEG are built to compress single, static images. They don't make any sense for this use. JPEG in particular would be a horrible way to try to manage a windowing system full of crisp, straight lines and text, exactly the things that JPEG hates.
So that's the drawing side (which isn't really PDF-centric anyway, but it answers why SVG DOM isn't the heart of Core Graphics), but why does PDF get its own framework and not PNG?
As a graphics format, PNG does get a lot of support in iOS (as opposed to OS X). iOS is highly optimized to display PNG, and converting images to and from PNG is very easy in iOS. There really isn't a need for a PNGKit. What would it do beyond what's built into UIKit, where PNG and JPEG are given their own special handling right in UIImage (handling that PDF doesn't get)?
PDFKit exists because PDF is complicated, and it addresses printing problems that the other formats don't (things like handling multiple pages; when was the last time someone mailed you an SVG Print document?). While it may seem that PDF is getting a lot of special support by getting its own framework, PDFKit isn't really very good and doesn't support a lot of the more complicated PDF features. So while there is a long history of PDF inside Core Graphics, PDF as a file format is actually pretty poorly supported by Apple, in my opinion. (Compare PSPDFKit, which is much more powerful, and a necessary addition for almost any app that uses PDF in a non-trivial way.)
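For simple cases, the Core Graphics side of PDF is still quite approachable. A hedged sketch of rendering the first page of a bundled PDF with the CGPDF* APIs (the file name is a placeholder, and a UIKit context normally also wants a flip transform, omitted here):

#import <UIKit/UIKit.h>

// Sketch: draw page 1 of a PDF with Core Graphics, e.g. inside drawRect:.
NSURL *url = [[NSBundle mainBundle] URLForResource:@"example" withExtension:@"pdf"];
CGPDFDocumentRef document = CGPDFDocumentCreateWithURL((__bridge CFURLRef)url);
CGPDFPageRef page = CGPDFDocumentGetPage(document, 1);   // CGPDF pages are 1-based

CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSaveGState(ctx);
CGContextDrawPDFPage(ctx, page);
CGContextRestoreGState(ctx);

CGPDFDocumentRelease(document);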

Pixel Shader performance in Directx9 not equivalent to Opengl ES2.0

I am using a drop effect, implemented with a pixel shader in DirectX 9 (specifically SlimDX.Direct3D9) and written in HLSL, for a transition between two images. I have written the same pixel shader in GLSL for an Android project using Java 6.0.
The issue is the performance difference between the two machines: the Android device shows a smooth transition, but there is visible pixelation on the Windows machine during the transition. Pixel Shader 2.0 is being used in the DirectX project.
I think a couple of pictures would help immensely.
It could be a difference in sampling coordinates. Make sure you are getting 1:1 texture-to-pixel mapping; note that Direct3D 9 offsets pixel centers by half a pixel relative to OpenGL, so full-screen passes typically need a half-texel correction to line texels up with pixels.
Another possibility could be that the filtering is set to point instead of linear.
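For reference, on the OpenGL ES side linear filtering is a per-texture sampler setting; the Direct3D 9 equivalent is the sampler's D3DSAMP_MINFILTER / D3DSAMP_MAGFILTER states, which should not be left at point filtering. A sketch (textureID stands in for one of the two transition textures):

#include <GLES2/gl2.h>

// Sketch: ensure a transition texture samples with linear filtering.
GLuint textureID = 0;   // placeholder: your existing texture handle
glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);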

Check Device Type For GPUImage

GPUImage requires, for iPhone 4 and below, images smaller than 2048 pixels. The 4S and above can handle much larger. How can I check to see which device my app is currently running on? I haven't found anything in UIDevice that does what I'm looking for. Any suggestions/workarounds?
For this, you don't need to check device type, you simply need to read the maximum texture size supported by the device. Luckily, there is a built-in method within GPUImage that does this for you:
GLint maxTextureSize = [GPUImageContext maximumTextureSizeForThisDevice];
The above will give you the maximum texture size for the device you're running this on. That will determine the largest image size that GPUImage can work with on that device, and should be future-proof against whatever iOS devices come next.
This method works by caching the results of this OpenGL ES query:
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTextureSize);
if you're curious.
I should also note that you can provide images larger than the max texture size to the framework, but they get scaled down to the largest size supported by the GPU before processing. At some point, I may complete my plan for tiling subsections of these images in processing so that larger images can be supported natively. That's a ways off, though.
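As a usage sketch, you could compare an incoming image against that limit before deciding whether to pre-scale it yourself (sourceImage is a placeholder UIImage; as described above, GPUImage will otherwise do the downscaling for you):

// Sketch: check an image against the GPU texture limit before processing.
GLint maxTextureSize = [GPUImageContext maximumTextureSizeForThisDevice];
CGFloat largestSide = MAX(sourceImage.size.width, sourceImage.size.height) * sourceImage.scale;
if (largestSide > (CGFloat)maxTextureSize) {
    NSLog(@"Longest side is %.0f px; GPU limit is %d px, so GPUImage will scale the image down.",
          largestSide, maxTextureSize);
}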
This is among the best device-detection libraries I've come across: https://github.com/erica/uidevice-extension
EDIT: Although the readme seems to suggest that the more up-to-date versions are in her "Cookbook" sources. Perhaps this one is more current.
Here is a useful class that I have used several times in the past; it is very simple and easy to implement:
https://gist.github.com/Jaybles/1323251

Best approach for music visualization/interaction app

I am an experienced Flash developer who has been learning Objective-C for the last 5 months.
I am beginning development of an app previously prototyped in Flash, and I'm trying to work out the best approach for porting it to iOS.
My app is kind of a music game. It consists of some dynamic graphics (circles growing and rotating), with typography also changing and rotating. Everything moves in sync with music. And at the same time the user can interact with the app (moving and rotating things) and some sounds will change depending on his actions.
Graphics can't be bitmaps because they get redrawn every frame.
This was easy to develop in Flash thanks to its handling of vector graphics, but I'm not sure what the best way to do it in Objective-C would be.
My options, I guess, are things like Core Graphics, OpenGL, or maybe Cocos2D (not sure whether that would be killing a flea with a sledgehammer), or even things like openFrameworks or Cinder, but I would rather use Objective-C than C++.
Any hint on where to look at will be appreciated.
EDIT:
I can't really put a real screenshot due to confidentiality issues. But it is something similar to this
But it will be interactive and sections would change size and disappear depending on the music and user interaction.
Which graphics library should you use? The answer is going to depend a lot on what you know or could learn. OpenGL will use hardware acceleration, so it's probably fastest. But OpenGL doesn't have built-in functions for drawing arc segments or any curves or text at all, so you'd probably have to do it yourself. Also, OpenGL is notoriously difficult to learn.
Core Graphics has many cool methods for drawing vector graphics (rectangles, arcs, general paths, etc.), but might be slower than you want, depending on what you're trying to do. Without having code to actually run it's hard to say.
It looks like Cocos2D is built on OpenGL and is made to be simple. I see lots of mention of sprites on their website, but nothing about vector graphics. (I've never used it, so it could be there and I'm just not seeing it.)
If I were in your position, I'd look into cocos2d and see if it does vector graphics at all. If not, I might give Core Graphics a try and see what performance was like. I know OpenGL can do what you want, but it can be difficult to learn, so I'd probably do that last.
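If you do try Core Graphics first, the kind of drawing described in the question (circles growing and rotating in time with the music) reduces to a handful of calls inside drawRect: of a UIView subclass. A minimal sketch; the radius and sweep angle are placeholders for values your audio analysis would drive:

#import <UIKit/UIKit.h>

// Minimal sketch: one arc stroked with Core Graphics in a UIView subclass.
- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGPoint center = CGPointMake(CGRectGetMidX(rect), CGRectGetMidY(rect));

    CGFloat radius = 80.0;          // placeholder: drive this from the audio level
    CGFloat sweep  = 1.5 * M_PI;    // placeholder: drive this from playback position

    CGContextSetStrokeColorWithColor(ctx, [UIColor whiteColor].CGColor);
    CGContextSetLineWidth(ctx, 4.0);
    CGContextAddArc(ctx, center.x, center.y, radius, 0, sweep, 0);
    CGContextStrokePath(ctx);
}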