Debugging CPU consumption and panning lag in Mapbox Android 8.x rendering

I'm using Mapbox for Android v8.3.0-beta.1 (with similar results on previous 8.x versions). The problem I'm facing is that when I drag/pan the map around, I see noticeable lag, as if the camera does not follow the gesture. The result is a very poor user experience, and on top of that, CPU load for the process often jumps to over 100%.
Admittedly, the hardware is low-end. Nevertheless, I'd like to improve the UX and fix the panning performance if possible, and ideally diagnose what causes the lag.
I've disabled all the animations, tried the simplest possible app with just a map, and tried enabling layers one by one. None of it made a significant difference; the map is still jerky from time to time.
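For reference, the stripped-down test app was roughly the following (a sketch against the Mapbox Android SDK 8.x API; the access token is a placeholder):
import android.os.Bundle;
import androidx.appcompat.app.AppCompatActivity;
import com.mapbox.mapboxsdk.Mapbox;
import com.mapbox.mapboxsdk.maps.MapView;
import com.mapbox.mapboxsdk.maps.Style;

public class MinimalMapActivity extends AppCompatActivity {
    private MapView mapView;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // The token must be set before the MapView is created
        Mapbox.getInstance(this, "YOUR_MAPBOX_ACCESS_TOKEN");
        mapView = new MapView(this);
        setContentView(mapView);
        mapView.onCreate(savedInstanceState);
        // Default style only; no annotations, no custom layers
        mapView.getMapAsync(map -> map.setStyle(Style.MAPBOX_STREETS));
    }

    // The MapView needs all lifecycle callbacks forwarded to it
    @Override protected void onStart() { super.onStart(); mapView.onStart(); }
    @Override protected void onResume() { super.onResume(); mapView.onResume(); }
    @Override protected void onPause() { mapView.onPause(); super.onPause(); }
    @Override protected void onStop() { mapView.onStop(); super.onStop(); }
    @Override protected void onDestroy() { mapView.onDestroy(); super.onDestroy(); }
    @Override public void onLowMemory() { super.onLowMemory(); mapView.onLowMemory(); }
}
Even with this bare setup, panning still stutters on the device.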
Apparently, versions prior to 7.x performed better.

Related

Decreasing speed decreases sound quality

Decreasing the playback speed of AudioPlayer severely decreases the quality of the audio being played; the audio becomes very "noisy".
Is there any way to fix this or is it an issue with the just_audio implementation?
Reproduce:
final AudioPlayer player = AudioPlayer(); // Create audio player
await player.setAsset("..."); // Load audio file (returns a Future)
await player.setSpeed(0.5); // Halve the playback speed
player.play(); // Start playing
Just to preface this answer: time stretching is a difficult thing to do in real time, because it has to stretch time without stretching the sound waves themselves (stretching the waves would lower the frequency, and hence the pitch, so the algorithm instead stretches time while filling the gaps with fabricated extensions of the existing waves). As a result, even the very best real-time algorithm will still introduce artifacts and distortions.
Now to answer your question: just_audio doesn't provide any options to change the time-stretching algorithm, but it does use the best available algorithm on each platform for general-purpose usage. The Android implementation uses Sonic, which is better quality than Android's own built-in algorithm. On iOS/macOS, AVAudioTimePitchAlgorithmTimeDomain is used, which seems to produce the least distortion at speeds below 1.0 out of the different algorithms Apple provides, although newer iPhones/iOS versions may produce higher-quality output. On the web, it uses whatever algorithm the browser provides.
If you need to try out alternatives, you would need to make a copy of just_audio and edit the code that selects the algorithm. You are unlikely to find better options for Android and web, but you might like to experiment with the different iOS/macOS algorithms by searching for AVAudioTimePitchAlgorithmTimeDomain in the code and changing it to one of the other options listed in Apple's documentation. You may find one of the other algorithms works better if you have a specialised use case.
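For context, just_audio's Android implementation sits on top of ExoPlayer, which routes speed changes through its Sonic-based audio processor. A raw ExoPlayer equivalent of setSpeed(0.5) looks roughly like this (a Java sketch, not the plugin's actual code; the URL is a placeholder):
import android.content.Context;
import com.google.android.exoplayer2.MediaItem;
import com.google.android.exoplayer2.PlaybackParameters;
import com.google.android.exoplayer2.SimpleExoPlayer;

public final class HalfSpeedDemo {
    // Plays a track at half speed; ExoPlayer applies the speed change
    // through its Sonic-based audio processor, as described above.
    public static SimpleExoPlayer playAtHalfSpeed(Context context, String url) {
        SimpleExoPlayer player = new SimpleExoPlayer.Builder(context).build();
        player.setMediaItem(MediaItem.fromUri(url));
        player.setPlaybackParameters(new PlaybackParameters(0.5f)); // speed 0.5, pitch preserved
        player.prepare();
        player.play();
        return player;
    }
}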

HoloLens external rendering

Does anyone have a good solution for external rendering for Microsoft HoloLens apps? Specifically: is it possible to let my laptop render an amount of 3D objects that is too much for the HoloLens GPU, and then display them on the HoloLens over Wi-Fi, including spatial mapping and interaction?
It's possible to render remotely, both directly from the Unity editor and from a built application.
While neither achieves your goal of a "good solution", they both allow very intensive applications to at least run at all.
This guide walks you through adding it to an app you're building:
https://learn.microsoft.com/en-us/windows/mixed-reality/add-holographic-remoting
This is for running directly from the editor:
https://blogs.unity3d.com/2018/05/30/create-enhanced-3d-visuals-with-holographic-emulation-in-uwp/
I don't think this is possible, since you can't really access the OS or the processor directly on the HoloLens. Even if you do manage to send the data to a third party to process, the result would still need to be run back through the HoloLens, which is really just the same as before.
You may find a way to hook up a VR backpack to it, but even then I highly doubt it would be possible.
If you are having trouble rendering 3D objects, you should reduce the number of triangles, use a lower-resolution shader, or reduce the size of the object. The biggest factor in processing 3D objects on the HoloLens is how much of the lens is being drawn on. If your object takes up 25% of the view instead of 100%, it will be easier to process.
Also, if you can't avoid having a lot of objects in the scene, check out LOD (level of detail), which reduces the resolution of objects based on the distance to them, and vice versa.

Glitches when using SimpleScreenRecorder on an Arch Linux box

I've been making video tutorials for my friends on how to program. On my old computer I always ran SimpleScreenRecorder and it recorded fine, but recently I got a new computer with a fresh install of Arch Linux. I set up the environment with everything I needed to make another video, installed SimpleScreenRecorder using yaourt, and started recording. I recorded up to a two-hour session without knowing that it was glitching out; while recording I don't see the issue on screen, only in the final rendered product. I think it might be a rendering error, or that I don't have the right codecs.
After an hour or two of searching the web, I could find no forum posts about it. I considered several things that could be wrong. FPS was my first suspect, but recordings at both 25 and 50 FPS still glitched. My next idea was that I had the wrong codec (H.264), but searching turned up no solution to that either. Then I thought I might have been encoding at too high a speed (23), but that proved wrong too. So now I'm at a loss for how to find the answer.
Settings screenshot:
Video Link:
https://www.youtube.com/watch?v=zfyIZiJCDa4
The glitches often relate to the rendering backend of the window compositor you are using.
Solution 1 - Change the rendering backend of the window compositor
#thouliha reported having issues with compton. In my case I had glitches with OpenGL (2.0 and 3.1) and resolved the issue by switching to XRender for recording.
On KDE you can easily change the rendering backend of the window compositor in the settings.
Solution 2 - Change the Tearing Prevention method
To keep using OpenGL (for example, for better performance), you can also tweak the tearing prevention method.
In my case, switching it from Automatic to Never allowed me to record video with the OpenGL compositor without glitches.
Solution 3 - Intel iGPU specific issues
Intel iGPUs (Intel graphics) have rendering issues with some CPUs.
You can check the Troubleshooting section of the Arch Linux wiki for these.
Examples of features that can create tearing or flickering issues:
SNA
VSYNC
Panel Self Refresh (PSR)
Also check /etc/X11/xorg.conf.d/20-intel.conf in case your system has put tweaks in there.
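For example, a minimal 20-intel.conf touching those options might look like this (option names per the xf86-video-intel driver; treat it as a starting point for experiments, not a recommendation):
Section "Device"
    Identifier "Intel Graphics"
    Driver "intel"
    # SNA is the newer acceleration method; try "uxa" if SNA glitches
    Option "AccelMethod" "sna"
    # TearFree can remove tearing at some performance cost
    Option "TearFree" "true"
EndSection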
I'm not exactly sure what you mean by glitching out, especially since the video is down now, but I've found that the video is choppy when using compton, so I had to turn that off.

Cocos2dx performance issue on Windows Phone 8

I'm trying to port an Android/iOS game to Windows Phone 8 (cocos2d-x v2.2). I'm using the exact same code base that I used for Android and iOS. The game functions just fine, but I'm facing a major FPS drop. The game runs flawlessly at 60 FPS on Android and iOS, but I'm getting roughly 35 FPS on WP8. Does this have anything to do with the differences between OpenGL and DirectX?
I doubt it's down to the game's logic and calculations, because when the game starts on Windows Phone, it runs at 60 FPS on the main menu, which has about 5 sprites. But as I add more sprites to the screen, say about 30 of them (the average number of sprites when I'm IN the game), the FPS rapidly drops to the 35-40 range. Note that there are no schedulers or update functions running at this point. I did the same test on Android, but the FPS didn't drop. Does the WP8 port of cocos2d-x just perform poorly?
Any help, comments, or redirection to useful articles would be appreciated.
Thank you.
In case anyone runs into a similar issue: I reduced the number of children in the scene and deployed the build in release mode, which gave a major boost to the FPS. Also, I had a bunch of float-to-string and int-to-string conversions happening every frame inside the update function; those were eating away at the processing speed too.
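To illustrate that last point, the fix is to stop formatting numbers every frame and only rebuild the string when the value actually changes. A sketch of the pattern (in Java for brevity; the cocos2d-x code would be the C++ equivalent, and the names here are made up):
public final class ScoreLabelCache {
    private int lastScore = Integer.MIN_VALUE;
    private String cached = "";

    // Called once per frame from the update loop
    public String text(int score) {
        if (score != lastScore) { // reformat only when the value changes
            lastScore = score;
            cached = "Score: " + score; // the expensive conversion now runs rarely
        }
        return cached;
    }
}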
Actually, the cocos2d-x port for WP8 is OK, but outdated. Cocos2d-x is now at 3.0 beta, but the WP8 port was left at 2.0 alpha.
Anyway, in Cocos there are some recursive drawing functions which are very heavy on the CPU. Also keep in mind that even though WP8 is supposed to support arrays, lists, maps, etc., they are very slow on WP8.
And since you brought up this subject: please let me know if you managed to successfully get cocos2d-x running in an XAML+D3D interop project. I am getting tons of crashes.
EDIT: Indeed, the recursive calls which process (draw or update) child CCNodes are very heavy on the device. However, after putting cocos2d-x 2.0 alpha for WP8 into an XAML+D3D interop project, I found a whole lot of memory-related issues. Apparently, after doing this (or maybe just because I don't know how to properly configure my VS project and allow loose addressing), a lot of uninitialized pointers and data cause memory overlaps, leading to major crashes.
This only proves that it truly was an alpha release :) Too bad no newer version of cocos2d-x for WP8 is available.

Retrieve video RAM usage on iPhone

I have seen this article about programmatically retrieving the memory usage of an iPhone app:
programmatically-retrieve-memory-usage-on-iphone. It's great!
In my project I want to retrieve the amount of free VRAM, because my app loads many textures and I must preload them into video RAM for fast rendering.
But I don't see such properties in vm_statistics: vm_statistics man page.
Thanks a lot for your help.
As you've seen so far, getting hard numbers for GL texture memory usage is quite difficult. It's complicated further by the fact that CoreAnimation will also use GL Texture memory without "consulting" you, including from processes other than yours.
Practically speaking, I suggest that you use the VM Tracker instrument in Instruments to watch changes in the VM pages your process maps under the IOKit tag. It's a bit crude, but it's the best approach I've found. In my experience, this process is largely guess and check.
You asked specifically for a way to determine the amount of free VRAM, but even if you could get that info, it's not really likely to be helpful. Even if your app is totally OpenGL and uses no UIViews or CoreAnimation layers, other processes, most importantly those more privileged than yours, can consume that memory at any time, either explicitly or implicitly through CoreAnimation. It's also probably safe to assume that if your app prevents those more-privileged apps from getting the texture memory they need, your process will be killed.
Put differently, even if you could ascertain the instantaneous state of the GL texture memory, you probably couldn't count on being the only consumer of that resource, so it's pretty useless.
At the end of the day, you should spend your effort designing your app to be a good citizen in terms of GL memory and manage (read: minimize) your own consumption of texture memory. iOS devices are not old-school game consoles -- you are not the only thing running -- so you need to be mindful and tolerant of that fact, lest your app be one of those where everyone has to reboot their phone every few minutes in order to use it.
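If it helps, here is the kind of self-accounting I mean, as a rough sketch (platform-neutral Java for illustration; the names are hypothetical, and the 4/3 factor approximates a full mipmap chain for RGBA8888 textures):
// Hypothetical helper: track an estimate of the texture bytes *you* allocate,
// since the system won't tell you how much VRAM is actually free.
public final class TextureBudget {
    private long estimatedBytes = 0;

    // RGBA8888 is 4 bytes per texel; a full mipmap chain adds roughly 1/3 on top.
    public long estimate(int width, int height, boolean mipmapped) {
        long bytes = (long) width * height * 4;
        return mipmapped ? bytes * 4 / 3 : bytes;
    }

    public void onTextureCreated(int width, int height, boolean mipmapped) {
        estimatedBytes += estimate(width, height, mipmapped);
    }

    public void onTextureDeleted(int width, int height, boolean mipmapped) {
        estimatedBytes -= estimate(width, height, mipmapped);
    }

    // Compare against a self-imposed budget rather than "free VRAM".
    public boolean withinBudget(long budgetBytes) {
        return estimatedBytes <= budgetBytes;
    }
}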