Using a counter to run a model every n frames on Android - mediapipe

I am new to MediaPipe and want to know whether it is possible to run a TFLite model only every n frames.
For example, I want to run the palm detection model once every 10 frames instead of on every frame, on Android (not Python).

Yes, it is possible. You can create your own calculator that counts frames. What I ended up doing was writing a custom flow-limiter calculator that keeps a count of each non-dropped frame.
Based on this count, you can add an ALLOW packet (a Boolean) as an output of the flow-limiter calculator and feed that packet into a gate calculator to start or stop specific models (e.g., palm detection) every n frames, as sketched below.
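Here is a minimal sketch of that counting-and-gating logic. Real MediaPipe calculators are written in C++ and wired up in a graph config; the class and function names below are illustrative stand-ins, not MediaPipe APIs.

```python
class CountingFlowLimiter:
    """Counts non-dropped frames and emits an ALLOW flag on every n-th one."""

    def __init__(self, n):
        self.n = n        # run the gated model once every n frames
        self.count = 0    # non-dropped frames seen so far

    def process(self, frame):
        allow = (self.count % self.n == 0)  # True on frames 0, n, 2n, ...
        self.count += 1
        return frame, allow                 # the frame plus the ALLOW packet

def gate(frame, allow, run_model):
    """Stand-in for MediaPipe's GateCalculator: forwards the frame to the
    model only while the ALLOW signal is True."""
    return run_model(frame) if allow else None

# Palm detection then runs only on frames 0, 10, 20, ...
limiter = CountingFlowLimiter(n=10)
for frame_id in range(25):
    frame, allow = limiter.process(frame_id)
    result = gate(frame, allow, lambda f: "palm_detection(%s)" % f)
    if result is not None:
        print(result)
```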

Related

How does DeepMind reduce the computation of Q-values for Atari games?

We know Q-learning needs tons of computation: there is a huge number of states in the Q-learning calculation.
A game-playing AI needs far more Q-values than an OX (tic-tac-toe) or Go game.
How can such a large number of Q-values be computed?
Thanks.
MCTS didn't actually reduce any computation of Q-values.
Even a very simple Atari game AI needs far more than 3^(19x19) Q-values.
Check out the Deep Q-Network (DQN); it solves your problem.
We could represent our Q-function with a neural network that takes the state (four game screens) and action as input and outputs the corresponding Q-value. Alternatively we could take only game screens as input and output the Q-value for each possible action. This approach has the advantage that if we want to perform a Q-value update or pick the action with highest Q-value, we only have to do one forward pass through the network and have all Q-values for all actions immediately available.
https://neuro.cs.ut.ee/demystifying-deep-reinforcement-learning/
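The second architecture from the quote is straightforward to sketch. Below is a minimal, hedged example in tf.keras; the layer sizes loosely follow the DQN paper but are illustrative assumptions, not DeepMind's actual code.

```python
import numpy as np
import tensorflow as tf

NUM_ACTIONS = 4  # assumption: number of joystick actions in the game

# Four stacked 84x84 game screens in, one Q-value per action out.
model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(84, 84, 4)),
    tf.keras.layers.Conv2D(32, 8, strides=4, activation="relu"),
    tf.keras.layers.Conv2D(64, 4, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(64, 3, strides=1, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation="relu"),
    tf.keras.layers.Dense(NUM_ACTIONS),  # linear output: one Q per action
])

# One forward pass yields all Q-values at once, so picking the greedy
# action is a single argmax over the network output.
state = np.zeros((1, 84, 84, 4), dtype=np.float32)
q_values = model(state)
best_action = int(tf.argmax(q_values[0]))
print(best_action)
```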

Issue updating a LabVIEW waveform chart

I have a LabVIEW program that collects data at 2 Hz. I have 8 channels of data I need to plot on a waveform chart. However, because the program needs to run for long periods of time, I run into memory issues from storing all the data on the chart. I would like the update frequency to be a user input, but I cannot figure out how to do it. I tried passing the data in through a loop, but it would never execute.
To paint a clearer picture: I want to plot every other data point, or even fewer. I don't need all the data points on the plot.
You can use a master-slave setup and have an event-triggered update. If you need to, you can create a global variable file to store your data; when you trigger the update, the data will be read from there.
I always tend to separate the LabVIEW UI into its own thread this way, and it works well for what you're describing. A language-agnostic sketch of the decimation idea follows.
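LabVIEW code is graphical and can't be pasted here, but the decimation logic itself is simple. Here is a minimal Python sketch of it; the class name and the bounded-history size are illustrative assumptions:

```python
from collections import deque

class DecimatingChart:
    """Keeps every n-th sample, with a bounded history so memory stays flat."""

    def __init__(self, n, history=1000):
        self.n = n                           # user-selected update interval
        self.count = 0
        self.points = deque(maxlen=history)  # oldest points fall off the chart

    def push(self, sample):
        """Called by the 2 Hz acquisition loop (the 'master')."""
        if self.count % self.n == 0:         # forward only every n-th point
            self.points.append(sample)
        self.count += 1

chart = DecimatingChart(n=2)                 # plot every other data point
for i in range(10):
    chart.push(float(i))
print(list(chart.points))                    # [0.0, 2.0, 4.0, 6.0, 8.0]
```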

Input live data into AnyLogic

I'm currently a Mechanical Engineering student looking into a project on Intelligent Manufacturing.
I have been using AnyLogic to explore manufacturing simulation. I have created a basic job shop that involves the transportation of material pallets from delivery to storage to processing. My next step is to transition this static scheduling system to a dynamic one.
I would like to know whether there is any way to actively manipulate the simulation while it is running, for example, controlling the availability of processing machines in real time or triggering a delivery. So far I have been unable to find any way of manipulating the simulation once it has started.
Does anybody have experience with real-time data input into simulation software?
In your model, you can always add control elements (buttons, check boxes, sliders, etc.). By adding these, you can control your model at runtime. For instance, if you have a variable X equal to 3 in your model, you can add a button with the action code X=4; and the variable X will change its value.
My suggestion is to explore the different options in the controls palette and consult the AnyLogic help to learn how to use each of them.
These controls must be placed in Main in order to make changes while the simulation is running. If you place them in the simulation experiment window, you won't be able to use them at runtime.

TensorFlow: one of 20 parameter servers is very slow

I am trying to train a DNN model using TensorFlow. My script has two variables: a dense feature and a sparse feature. Each minibatch pulls the full dense feature and the specified sparse features using embedding_lookup_sparse, and the feed-forward pass can only begin once the sparse features are ready. I run my script with 20 parameter servers, and increasing the worker count does not scale out. Profiling the job with the TensorFlow timeline, I found that one of the 20 parameter servers is very slow compared to the other 19, even though there is no dependency between the different parts of the trainable variables. I am not sure whether this is a bug or a limitation (e.g., can TensorFlow only queue 40 fan-out requests?). Any idea how to debug it? Thanks in advance.
It sounds like you might have exactly 2 variables, one stored on PS0 and the other on PS1. The other 18 parameter servers are not doing anything. Please take a look at variable partitioning (https://www.tensorflow.org/versions/master/api_docs/python/state_ops/variable_partitioners_for_sharding), i.e. partition a large variable into small chunks and store them on separate parameter servers, as in the sketch below.
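A hedged sketch of that partitioning, using the TF 1.x API the link refers to; the table shape and shard count are illustrative assumptions:

```python
import tensorflow as tf  # TensorFlow 1.x

# Split the big sparse-feature table into 20 shards. Combined with
# tf.train.replica_device_setter, each shard lands on its own PS task
# instead of the whole table sitting on a single server.
with tf.variable_scope("features",
                       partitioner=tf.fixed_size_partitioner(num_shards=20)):
    embedding_table = tf.get_variable(
        "embedding_table",
        shape=[10000000, 64],  # [vocab_size, embedding_dim], illustrative
        initializer=tf.random_uniform_initializer(-0.01, 0.01))

# tf.nn.embedding_lookup_sparse accepts a partitioned variable, so each
# lookup is spread across all 20 parameter servers.
```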
This is kind of a hacky way to log Send/Recv timings from the Timeline object for each iteration, but it works pretty well for analyzing the dumped JSON data (compared to visualizing it in chrome://tracing).
The steps you have to perform are:
- download the TensorFlow source and check out the correct branch (r0.12, for example)
- modify the only place that calls the SetTimelineLabel method inside executor.cc: instead of recording only non-transfer nodes, record Send/Recv nodes as well
- be careful to call SetTimelineLabel only once inside NodeDone, as it sets the text label of a node, which will later be parsed by a Python script
- build TensorFlow from the modified source
- modify the model code (for example, inception_distributed_train.py) to use Timeline and graph metadata correctly (see the sketch after this list)
Then you can run the training and retrieve a JSON file for each iteration! :)
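For reference, here is a hedged sketch of the Timeline-collection step in TF 1.x; the toy graph stands in for a real training op:

```python
import tensorflow as tf  # TensorFlow 1.x
from tensorflow.python.client import timeline

# Toy graph so the sketch runs standalone; replace with your train op.
x = tf.Variable(tf.random_normal([1000, 1000]))
train_op = tf.assign(x, tf.matmul(x, x) * 1e-3)

run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(3):
        sess.run(train_op, options=run_options, run_metadata=run_metadata)
        # One Chrome-trace JSON file per iteration; open it in
        # chrome://tracing or parse it offline with a script.
        trace = timeline.Timeline(step_stats=run_metadata.step_stats)
        with open("timeline_step_%d.json" % step, "w") as f:
            f.write(trace.generate_chrome_trace_format())
```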
Some suggestions that were too big for a comment:
You can't see data transfers in the timeline because tracing of Send/Recv is currently turned off; there is some discussion here: https://github.com/tensorflow/tensorflow/issues/4809
In the latest version (a nightly that is 5 days old or newer) you can turn on verbose logging with export TF_CPP_MIN_VLOG_LEVEL=1, which shows second-level timestamps (see here about higher granularity).
So with vlog you can perhaps use the messages generated by this line to see the times at which Send ops are generated.
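If you launch from Python rather than a shell, a minimal sketch of the same thing (assuming, as with most TF_CPP_* variables, that it must be set before TensorFlow is first imported):

```python
import os
# Must be set before the first TensorFlow import, because the C++
# logging level is read when the runtime library is loaded.
os.environ["TF_CPP_MIN_VLOG_LEVEL"] = "1"

import tensorflow as tf  # VLOG(1) messages, incl. Send timings, now print
```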

How to save a GNU Radio waterfall plot?

I want to measure the spectrum occupancy of one of the GSM bands for 24 hours, using GNU Radio and a USRP.
Is there any way to save GNU Radio's waterfall plot to an image file or some other format?
If not, is there any other way to show the spectrum occupancy over a certain amount of time in one image or graph?
Is there any way to save GNU Radio's waterfall plot to an image file or some other format?
Middle-mouse-button -> Save.
If not, is there any other way to show the spectrum occupancy over a certain amount of time in one image or graph?
This is a typical case for "offline processing and visualization". I'd recommend you simply build a GNU Radio flow graph that takes the samples from the USRP, applies decimating band-pass filters (ideally matched to the GSM pulse shape), computes the power of the resulting sample streams (complex_to_mag_squared), and then just saves these power vectors.
You could then easily visualize them later with e.g. numpy/matplotlib, or whatever tool you prefer; a sketch follows.
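A hedged sketch of that visualization step, assuming each band-pass channel's power stream was written as raw float32 by a file sink; the file names and channel count are illustrative:

```python
import numpy as np
import matplotlib.pyplot as plt

n_channels = 8  # assumption: one decimated band-pass channel per GSM carrier
channels = [np.fromfile("power_chan%d.dat" % i, dtype=np.float32)
            for i in range(n_channels)]

n = min(len(c) for c in channels)                 # align stream lengths
occupancy = np.stack([c[:n] for c in channels])   # shape: (channel, time)

plt.imshow(10 * np.log10(occupancy + 1e-12),      # power in dB
           aspect="auto", origin="lower")
plt.xlabel("time (decimated samples)")
plt.ylabel("GSM channel index")
plt.colorbar(label="power (dB)")
plt.savefig("occupancy_waterfall.png")            # the saved "waterfall"
```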
The real problem is that GSM spectrum access happens on the order of microseconds, while you want to observe for 24 hours; no visualization in this world can both represent accurately what's happening and still be compact. You will need to come up with some intelligent measure built atop the pure occupancy information.