TensorFlow facemesh delay in tracking landmark points

I am using face-mesh for one of my POCs, and the landmarks and detection are absolutely superb.
But what I found is that while moving the face, the tracking is slightly delayed. Is there any configuration available to sort this out? When I move quickly, it takes even more time to place the landmarks in the right spot. I tried both the WASM and WebGL backends. Performance-wise there is absolutely no problem, but the tracking lag is not giving the expected result.
Is there anything I'm missing to get this right? Any help on this is highly appreciated.

A small delay is to be expected, as inference will probably take tens of milliseconds at least, from my experience.
What FPS are you getting? It will very much depend on your hardware, of course. As WASM improves, hopefully things will get faster for free on the CPU too.
On my machine (a desktop with a GTX 1070) I get around 50 FPS in WebGL and 25 FPS with just the i7 CPU via WASM.
From what you have said, I do not think you have missed anything. If you could post a screen recording of the situation and a live demo, e.g. on CodePen or Glitch, that would help me compare with what I am seeing to confirm this.
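For reference, here is a minimal sketch of the tuning knobs I know of, plus a per-frame timer, assuming the @tensorflow-models/facemesh package (the load() option names below are from its README; check them against your installed version):

    import * as facemesh from '@tensorflow-models/facemesh';
    import '@tensorflow/tfjs-backend-webgl'; // or '@tensorflow/tfjs-backend-wasm'

    async function track(video: HTMLVideoElement): Promise<void> {
      // Fewer faces and fewer full-detector re-runs keep per-frame cost down.
      const model = await facemesh.load({
        maxFaces: 1,            // track a single face only
        maxContinuousChecks: 5, // frames between full detector passes
        detectionConfidence: 0.9,
      });

      const render = async () => {
        const t0 = performance.now();
        const faces = await model.estimateFaces(video);
        const t1 = performance.now();
        // If this number is large, the lag is inference cost, not tracking.
        console.log(`inference: ${(t1 - t0).toFixed(1)} ms, faces: ${faces.length}`);
        requestAnimationFrame(render); // always draw the freshest prediction
      };
      requestAnimationFrame(render);
    }

If the logged time per call is large, what you are seeing is plain inference latency rather than a tracking bug; if it is small, the lag is more likely in how the predictions are being rendered.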

Related

Debugging CPU consumption and dragging lag in MapBox Android 8.x

I'm using MapBox for Android v8.3.0-beta.1 (with similar results on the previous 8.x versions), and the problem I'm facing is that when I drag/pan the map around I see noticeable lag, as if the camera does not follow the movements. It results in a very poor user experience, and besides, the CPU load often jumps to over 100% for the process.
Admittedly, the hardware is low-end. Nevertheless, I'd like to improve the UX and fix the panning performance if possible, and ideally diagnose what causes the lag.
I've disabled all the animations, tried the simplest app with just a map, and tried enabling layers one by one. It didn't make a significant difference; the map is still jerky from time to time.
Apparently, versions prior to 7.x performed better.

Why not 1 CPU core in place of multiple cores?

I've just read something about how CPU cores interact with each other. I may be wrong on some points, so don't hesitate to correct me.
As I understand it, a CPU basically runs instructions that are stored in the L2 or L3 cache, and these instructions contain addresses that reference objects in DRAM.
A multi-core CPU can run more instructions, which results in better performance. But there is a little problem with that: the cores have to interact with each other, and this slows the process down a bit.
So, back to my question: why do we not use one CPU with a bigger cache? As I see it, this should give more performance for less cost, right?
I know these are basic things that you should know; I feel a little weird asking this.
Any answer would be welcome!
Multiple cores means you have duplicated circuitry, which allows you to do more work in parallel. Each core has its own L1 dcache and icache, along with its own registers, decode units, execution pipelines, etc.
Just having a bigger cache and a 20 GHz clock won't give you as much performance, because all of your work still has to share that single set of resources.
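To make that concrete, here is a minimal Node.js sketch in TypeScript using the standard worker_threads module (compile it to JS first, since each Worker re-executes this file). N workers finish N chunks of CPU-bound work in roughly the time one core needs for a single chunk, which no amount of extra cache on a single core can replicate:

    import { Worker, isMainThread } from 'worker_threads';
    import * as os from 'os';

    // A pure CPU-bound busy loop; no shared state, so the cores never interact.
    function burn(iterations: number): number {
      let acc = 0;
      for (let i = 0; i < iterations; i++) acc += Math.sqrt(i);
      return acc;
    }

    if (isMainThread) {
      const cores = os.cpus().length;
      const t0 = Date.now();
      let done = 0;
      for (let i = 0; i < cores; i++) {
        const w = new Worker(__filename); // each worker runs the else-branch
        w.on('exit', () => {
          if (++done === cores) {
            console.log(`${cores} cores did ${cores}x the work in ${Date.now() - t0} ms`);
          }
        });
      }
    } else {
      burn(200_000_000); // worker exits when it falls off the end of the script
    }

A single faster core would finish one chunk sooner, but the chunks would still run one after another, all contending for the same pipeline and cache.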
As someone pointed out to me, I was forgetting the clock speed of CPUs.
Imagine a single 20 GHz core: it would be far too fast for the RAM to keep up with, so most cycles would be spent stalled waiting on memory. The same mismatch holds true with overclocking.

Any way to get a stable/consistent FPS from the Kinect?

I am trying to record Kinect files in .oni format, which I will later try to synchronize with other sensors. As such, it is very important that I get a consistent FPS, even if some frames are repeats.
From what I can see, WaitAndUpdateAll does not guarantee that the frame rate is consistent. I will be recording for several minutes (20+), so I need to make sure there is no drift!
Does anyone know if it's possible to lock down the FPS of the recording, and if not, how stable the Kinect's recording FPS is? Thanks!
After some investigation of this issue, I put together the following write-up on the topic:
http://denislantsman.com/?p=50
Putting it here so interested people can find it and not have to wrestle with this issue.
My guess would be to go with the PCL library, since its developers also work with the ROS team, where they have to sync sensors a lot. But be warned: I wasn't able to capture XYZRGB clouds at 30 FPS on Windows 7. If you only need XYZ to be captured, you should be fine. Worst case, you have to time-stamp and sync all your data yourself.
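If you do end up time-stamping and resampling yourself, the core idea fits in a few lines. Here is a sketch in TypeScript, with a hypothetical Frame type standing in for whatever your capture API returns: emit frames on a fixed time grid and repeat the last-seen frame whenever the sensor was late, so the output rate never drifts.

    // Hypothetical Frame type: your capture payload plus the arrival time
    // you stamped on it. Input is assumed sorted by timestamp.
    interface Frame {
      timestampMs: number;
      data: Uint8Array;
    }

    // Re-emit frames on a fixed grid (e.g. 30 FPS), repeating the last
    // frame whenever the source was late, so the output rate never drifts.
    function resampleToFixedRate(frames: Frame[], fps: number): Frame[] {
      const periodMs = 1000 / fps;
      const out: Frame[] = [];
      if (frames.length === 0) return out;
      let src = 0;
      const end = frames[frames.length - 1].timestampMs;
      for (let t = frames[0].timestampMs; t <= end; t += periodMs) {
        // Advance to the newest source frame at or before this output tick.
        while (src + 1 < frames.length && frames[src + 1].timestampMs <= t) {
          src++;
        }
        out.push({ timestampMs: t, data: frames[src].data }); // repeat if late
      }
      return out;
    }

    // Usage: const steady = resampleToFixedRate(recordedFrames, 30);

Repeating frames this way matches the constraint in the question: duplicates are acceptable as long as the rate stays constant.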

Creating a heater application

This might seem weird, but I'm interested in creating an electric heater out of my computer; that is, programming an application that heats up my PC, and I need some help.
I currently have an application that runs infinite loops on the GPU (using a little shader) and on the CPU cores. However, I'm interested in getting the RAM going too, as well as the various output ports. For the RAM heating, do I just allocate memory and start randomly accessing and writing to it from all 8 cores?
And what about triggering the CD-ROM, floppy, etc.? How do I do this?
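For what it's worth, the RAM part of that idea might look like the following Node.js sketch in TypeScript (the worker_threads module is standard; the 512 MiB buffer size and one-worker-per-core count are assumptions to tune to your machine, and the file must be compiled to JS before Worker can re-execute it):

    import { Worker, isMainThread } from 'worker_threads';
    import * as os from 'os';

    const BYTES = 512 * 1024 * 1024; // 512 MiB per worker; adjust to your RAM

    if (isMainThread) {
      // One memory-hammering worker per core.
      for (let i = 0; i < os.cpus().length; i++) new Worker(__filename);
    } else {
      const buf = new Float64Array(BYTES / 8);
      // Random indices defeat the caches and the prefetcher, so the traffic
      // actually reaches DRAM. Runs until killed with Ctrl+C.
      for (;;) {
        const i = (Math.random() * buf.length) | 0;
        buf[i] = buf[(Math.random() * buf.length) | 0] + 1;
      }
    }

The random indices are the point: sequential access would mostly be served from the caches, so far less traffic would reach the DRAM you are trying to heat.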
How about a heater with a purpose? Just run World Community Grid: it creates tons of heat while making your computer do valuable computations for science. It runs the processors wide open, is stable, and isn't just wasting cycles.
Have a look at "How to stress test a computer". If you're interested in making your own, try searching for open-source stress-test software that you could modify to your liking.
Use FurMark together with LinX/Prime95. Max out your settings. Make sure you have a strong enough PSU.
There's a torture-test option for the CPU & RAM in Prime95 that looks like what you want. As for the GPU, FurMark achieves the same kind of stress.
The heat from the other components will likely not be relevant (unless you have something really specific like a PhysX card) if you stress your CPU and GPU enough, IMHO.

(When) Does hardware, especially the CPU(s), deliver wrong results?

What I'm talking about is: is it possible that under certain circumstances the CPU "bugs" and suddenly responds 1+1=2?
In which parts of the computer can that happen (HDD, RAM, mainboard)?
What could be the causes? Bad quality? Overheating?
Does that even happen? If so, how frequently?
If everything is okay with the CPU (not a single fault in production, good temperature), can that still happen sometimes?
What would be the consequences of, let's say, one to three wrong computations?
This is programming-related, as it would be nice to know whether you can even rely on the hardware to return the right results.
It can happen in all hardware; it happens quite often in RAM chips. There are mechanisms to detect and correct such errors, but with regard to RAM, only in the more expensive ECC modules. See Wikipedia's article on RAM and Error Correction.
Also interesting is the article on Error Detection and Correction in general.
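To make the correction mechanism less abstract, here is a toy Hamming(7,4) code in TypeScript: three parity bits protect four data bits, so any single flipped bit can be located and repaired. This is the same principle ECC memory applies to whole words, though it is only an illustration, not how any particular module implements it.

    type Bit = 0 | 1;

    // Encode 4 data bits into a 7-bit codeword: positions 1..7 hold
    // p1 p2 d1 p3 d2 d3 d4, each parity bit covering a fixed group.
    function encode(d: [Bit, Bit, Bit, Bit]): Bit[] {
      const [d1, d2, d3, d4] = d;
      const p1 = (d1 ^ d2 ^ d4) as Bit; // covers positions 1,3,5,7
      const p2 = (d1 ^ d3 ^ d4) as Bit; // covers positions 2,3,6,7
      const p3 = (d2 ^ d3 ^ d4) as Bit; // covers positions 4,5,6,7
      return [p1, p2, d1, p3, d2, d3, d4];
    }

    // Re-check each parity group; the syndrome bits spell out the 1-based
    // position of a single-bit error (0 means no error detected).
    function correct(code: Bit[]): Bit[] {
      const s1 = code[0] ^ code[2] ^ code[4] ^ code[6];
      const s2 = code[1] ^ code[2] ^ code[5] ^ code[6];
      const s3 = code[3] ^ code[4] ^ code[5] ^ code[6];
      const pos = s1 + 2 * s2 + 4 * s3;
      const fixed = code.slice();
      if (pos > 0) fixed[pos - 1] = (fixed[pos - 1] ^ 1) as Bit;
      return fixed;
    }

    // Flip one bit "in transit" and watch it get repaired.
    const sent = encode([1, 0, 1, 1]);
    const received = [...sent];
    received[4] = (received[4] ^ 1) as Bit;
    console.log(correct(received).join('') === sent.join('')); // true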
One example: http://en.wikipedia.org/wiki/Pentium_FDIV_bug
"Is it possible that under certain circumstances the CPU 'bugs' and suddenly responds 1+1=2?"
Yes.
"In which parts of the computer can that happen (HDD, RAM, mainboard)?"
All of them.
"What could be the causes? Bad quality? Overheating?"
The most common cause is overclocking. Less common causes include faulty hardware.
"If everything is okay with the CPU (not a single fault in production, good temperature), can that still happen sometimes?"
It can be a RAM problem, like I said above, or really anything.
"What would be the consequences of, let's say, one to three wrong computations?"
I don't quite understand this question. Do you mean what would happen to the program? It would probably segfault, but it's impossible to say. Do you mean what 1+1 would result in? Impossible to say. Do you mean what would happen if one in three computations on average were wrong? The computer wouldn't even boot.
Well, first you need to find a computer engineer who thinks that 1+1=2 is a bug and that it's a hardware problem which needs to be fixed.
@Andreas Bonini, Midhat and Pekka: in such instances it would be highly recommended to take a maths course on April Fools' Day.
Andrew Appel had a great demo a few years ago where he started a lecture by lighting a 100W bulb under a PC running Java. Within 20 minutes there were enough memory errors that he could exploit one to crack the Java virtual machine and take it over.
Cool your hardware!