On the iPhone 4 and below, GPUImage requires images no larger than 2048 pixels on a side. The 4S and above can handle much larger images. How can I check which device my app is currently running on? I haven't found anything in UIDevice that does what I'm looking for. Any suggestions/workarounds?
For this, you don't need to check the device type; you simply need to read the maximum texture size supported by the device. Luckily, there is a built-in method within GPUImage that does this for you:
GLint maxTextureSize = [GPUImageContext maximumTextureSizeForThisDevice];
The above will give you the maximum texture size for the device you're running this on. That will determine the largest image size that GPUImage can work with on that device, and should be future-proof against whatever iOS devices come next.
If you're curious, this method works by caching the result of this OpenGL ES query:
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTextureSize);
I should also note that you can provide images larger than the max texture size to the framework, but they get scaled down to the largest size supported by the GPU before processing. At some point, I may complete my plan for tiling subsections of these images in processing so that larger images can be supported natively. That's a ways off, though.
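If you'd rather handle the downscaling yourself instead of letting the framework do it, here is a minimal sketch of gating on that value (`photo` is an assumed UIImage, and `scaledImageWithMaxDimension:fromImage:` is a hypothetical helper, not GPUImage API):
GLint maxTextureSize = [GPUImageContext maximumTextureSizeForThisDevice];
// UIImage sizes are in points, so multiply by the scale factor to get pixels.
CGFloat largestSide = MAX(photo.size.width, photo.size.height) * photo.scale;
if (largestSide > maxTextureSize) {
    // GPUImage would scale this down anyway; doing it yourself lets you
    // control the interpolation quality.
    photo = [self scaledImageWithMaxDimension:maxTextureSize fromImage:photo];
}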
This is among the best device-detection libraries I've come across: https://github.com/erica/uidevice-extension
EDIT: The readme seems to suggest that the more up-to-date versions are in her "Cookbook" sources, so those may be more current.
Here is a useful class that I have used several times in the past; it is very simple and easy to implement:
https://gist.github.com/Jaybles/1323251
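Under the hood, classes like these typically read the hardware model string from the kernel and then map it to a friendly name; a minimal sketch of that first step:
#include <sys/utsname.h>

// Returns the raw hardware identifier, e.g. "iPhone4,1" for an iPhone 4S.
NSString *platformString(void) {
    struct utsname systemInfo;
    uname(&systemInfo);
    return [NSString stringWithCString:systemInfo.machine
                              encoding:NSUTF8StringEncoding];
}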
Lowering the playback speed of AudioPlayer severely degrades the quality of the audio being played; it becomes very "noisy".
Is there any way to fix this or is it an issue with the just_audio implementation?
Reproduce:
final AudioPlayer player = AudioPlayer(); // Create the audio player
await player.setAsset("..."); // Load the audio file (must complete before playback)
await player.setSpeed(0.5); // Halve the playback speed
player.play(); // Start playing
To preface this answer: time stretching is difficult to do in real time because it has to stretch time without stretching the sound waves themselves (stretching the waves would lower their frequency and hence their pitch), so it instead fills the gaps with fabricated extensions of the existing waves. As a result, even the very best real-time algorithm will still introduce artifacts and distortions.
Now to answer your question: just_audio doesn't provide any options to change the time-stretching algorithm, but it does use the best algorithm available on each platform for general-purpose usage. The Android implementation uses Sonic, which is better quality than Android's own built-in algorithm. On iOS/macOS, AVAudioTimePitchAlgorithmTimeDomain is used, which, of the different algorithms Apple provides, seems to produce the least distortion at speeds below 1.0, although newer iPhones/iOS versions may produce higher-quality output. On the web, it uses whatever algorithm the browser provides.
If you need to try out alternatives, you would need to make a copy of just_audio and edit the code that selects the algorithm. You are unlikely to find better options for Android and web, but you might like to experiment with the different iOS/macOS algorithms by searching for AVAudioTimePitchAlgorithmTimeDomain in the code and changing it to one of the other options listed in Apple's documentation. You may find one of the other algorithms works better if you have a specialised use case.
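To give a sense of the change involved, here is an illustrative Objective-C fragment (not just_audio's actual source; `audioURL` is a placeholder) showing where such a constant would be swapped:
AVPlayerItem *item = [AVPlayerItem playerItemWithURL:audioURL];
// The default just_audio picks on iOS/macOS:
item.audioTimePitchAlgorithm = AVAudioTimePitchAlgorithmTimeDomain;
// Alternatives from Apple's documentation you could try instead:
// item.audioTimePitchAlgorithm = AVAudioTimePitchAlgorithmSpectral;  // smoother, costs more CPU
// item.audioTimePitchAlgorithm = AVAudioTimePitchAlgorithmVarispeed; // shifts pitch along with rate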
We have created a simple puzzle game with Unity. The final APK size is 20 MB, while our graphic and sound assets combined are only 6 MB. We have already done some optimization based on tips we found on the Internet (before that, it was 28 MB).
The question is for experienced developers, and it is very simple:
Is 20 MB the smallest size we can achieve? If not, what, in your opinion, is the smallest achievable size for this kind of game? It has only one level.
Link to the game: https://play.google.com/store/apps/details?id=com.strategeens.kineticpuzzle
Yes, it's reasonable. Unity3D has quite a large footprint of its own; depending on the platform, the engine alone can account for more than 15 MB.
You can check the Editor log to see how much space is taken by your assets; the rest is the binary and the engine's internal resources.
For a rough baseline, just deploy an empty project with a single scene to the desired platform and see how large it comes out.
With Unity 5 they started to modularize the engine a bit (see this post).
One of the reasons is space. One of the benefits is that in the future you should be able to build only the modules relevant to your game (e.g. no need for physics? don't build PhysX).
In your Player Settings, you can change the Device Filter to ARMv7 only, which will reduce your build size at the cost of compatibility with certain devices. Also, change the API Compatibility Level to .NET 2.0 Subset and the Stripping Level to Strip Byte Code or even Use micro mscorlib. You can read more about these settings in the manual.
However, I must say that 20 MB is pretty small for a Unity application, and is pretty good from a product point of view. If, however, you begin to approach the 50 MB limit, then you really need to worry: you'll have to implement an OBB split if you decide to go over it.
I found some information on the Internet saying that Core Image processes images on the CPU if either dimension (width or height, or both) is larger than 2048 pixels. It appears to be true, because applying a CIFilter to a 3200x2000 image is very slow, while doing the same to a 2000x2000 image is much faster. Is it possible to tell Core Image to always process images on the GPU? Or was the information I found incorrect?
Processing on the GPU is not always faster, because your image data first has to be loaded into GPU memory, processed, and then transferred back.
You can use kCIContextUseSoftwareRenderer to force software rendering (on the CPU), but there is no constant to force rendering on the GPU, I'm afraid. Also, software rendering does not work in the Simulator.
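For reference, a minimal sketch of creating a context with that option set:
NSDictionary *options = @{ kCIContextUseSoftwareRenderer : @YES };
CIContext *context = [CIContext contextWithOptions:options];
// @YES forces the software renderer; there is no value that forces the GPU.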
The maximum size depends on the device you're working on. For the iPhone 3GS/4 and iPad 1, it's 2048x2048. For later iPhones/iPads, it's 4096x4096. On OS X, it depends on your graphics card and/or OS version (2048, 4096, 8192, or 16384 pixels per side).
One possible way around the limit is to split your image into tiles below the limit, process each tile separately, and then stitch the pieces back together afterwards.
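A rough sketch of that tiling approach, assuming a configured CIFilter named `filter` and a CGImageRef named `inputImage` (both placeholders), and ignoring the seam artifacts that area filters such as blurs would create at tile borders:
CIContext *ciContext = [CIContext contextWithOptions:nil];
size_t width = CGImageGetWidth(inputImage);
size_t height = CGImageGetHeight(inputImage);
size_t tileSize = 2048; // stay at or below the GPU texture limit

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef output = CGBitmapContextCreate(NULL, width, height, 8, 0,
                                            colorSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);

for (size_t y = 0; y < height; y += tileSize) {
    for (size_t x = 0; x < width; x += tileSize) {
        CGRect tileRect = CGRectMake(x, y, MIN(tileSize, width - x), MIN(tileSize, height - y));
        CGImageRef tile = CGImageCreateWithImageInRect(inputImage, tileRect);
        [filter setValue:[CIImage imageWithCGImage:tile] forKey:kCIInputImageKey];
        CGImageRef processed = [ciContext createCGImage:filter.outputImage
                                                fromRect:filter.outputImage.extent];
        // CGImage coordinates are top-left based while CGContext draws bottom-up,
        // so flip the y coordinate when placing the tile back.
        CGRect destRect = CGRectMake(x, height - y - tileRect.size.height,
                                     tileRect.size.width, tileRect.size.height);
        CGContextDrawImage(output, destRect, processed);
        CGImageRelease(processed);
        CGImageRelease(tile);
    }
}
CGImageRef result = CGBitmapContextCreateImage(output);
CGContextRelease(output);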
I am creating an OpenGL video player using FFmpeg, and none of my videos have power-of-two dimensions (as they are normal video resolutions). The player runs at a fine frame rate on my NVIDIA card, but I've found that it won't run on older ATI cards because they don't support non-power-of-two textures.
I will only be using this on an NVIDIA card, so I don't care about the ATI problem too much, but I was wondering how much of a performance boost I'd get if the textures were power-of-two. Is it worth padding them out?
Also, if it is worth it, how do I go about padding them out to the nearest larger power-of-two?
When writing a video player, you should update your texture content using glTexSubImage2D(). This function allows you to supply arbitrarily sized images, which will be placed at a given position within the target texture. So you can first initialize a power-of-two texture with a call to glTexImage2D() with a NULL data pointer, then fill in the data.
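A minimal sketch of that approach, including rounding up to the next power of two (`videoW`, `videoH`, `videoTexture`, and `frameData` are placeholders for your own decoder and GL setup):
// Round up to the next power of two (assumes x > 0).
unsigned nextPow2(unsigned x) {
    unsigned p = 1;
    while (p < x) p <<= 1;
    return p;
}

unsigned texW = nextPow2(videoW), texH = nextPow2(videoH);

// Once: allocate a power-of-two texture without supplying any data.
glBindTexture(GL_TEXTURE_2D, videoTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, texW, texH, 0,
             GL_RGB, GL_UNSIGNED_BYTE, NULL);

// Per frame: upload only the video-sized region into the corner.
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, videoW, videoH,
                GL_RGB, GL_UNSIGNED_BYTE, frameData);

// When drawing, clamp your texture coordinates so only that region is sampled:
float sMax = (float)videoW / texW;
float tMax = (float)videoH / texH;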
The performance gain from pure power-of-two textures strongly depends on the hardware used, but in extreme cases it may be as high as 300%.
I want to build an app similar to Fat Booth, Aging Booth, etc. I am a total noob at digital image processing. Where should I start? Any hints?
Processing images on the iPhone with any kind of speed is going to require OpenGL ES. That would be the place to start. (If this is your first iOS project, though, I wouldn’t recommend starting off with GL.)
Apple has an image processing example available here: http://developer.apple.com/iphone/library/samplecode/GLImageProcessing/Introduction/Intro.html.
I imagine the apps you refer to use GL too. Fat Booth, for example, might texture a mesh with your photo, then distort the mesh to make the photo bulge out in the middle. It could also be done purely with fragment shaders.
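To illustrate the fragment-shader route, here's a rough sketch of a bulge distortion in GLSL ES (the center point and strength are arbitrary values, not anything taken from those apps):
precision mediump float;
uniform sampler2D photo;   // the user's photo as a texture
varying vec2 texCoord;     // interpolated texture coordinates

void main() {
    vec2 center = vec2(0.5, 0.5);
    vec2 offset = texCoord - center;
    float dist = length(offset);
    // Near the center, sample from coordinates pulled in toward the middle,
    // which magnifies that region and makes the photo appear to bulge out.
    float scale = 1.0 - 0.5 * exp(-dist * dist * 10.0);
    gl_FragColor = texture2D(photo, center + offset * scale);
}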