How is HeapMemoryUsagePercent Calculated in JDK Mission Control (JMC)?

I wrote a program using the JMX API to calculate the JVM heap usage percentage. The code uses two values from the java.lang:type=Memory MBean's HeapMemoryUsage attribute (used and max).
When I use the formula (used / max * 100) to calculate the JVM used-memory percentage, it gives a very different value from what JMC displays. For example:
In JMC I see the percentage as 45.3%, but used = 708 MB and max = 6 GB, which in my code gives a much lower percentage (roughly 11.5%).
In Task Manager > Processes tab > Memory column, I looked at the corresponding TomEE process; its memory usage is close to the used value shown in JMC.
I need some guidance on the right way to calculate or look up the JVM usage percentage. Why is there a difference between JMC's HeapMemoryUsagePercent attribute and the percentage calculated in my code?
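For comparison, here is a minimal sketch (the host, port, and class name are illustrative) that reads the same HeapMemoryUsage composite attribute over JMX and prints both the used/max and the used/committed ratios. Since the committed size is usually much smaller than max, the two percentages can differ substantially, and printing both makes it easy to see which one matches the figure JMC displays.

import java.lang.management.MemoryUsage;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.openmbean.CompositeData;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class HeapPercent {
    public static void main(String[] args) throws Exception {
        // Hypothetical service URL; replace host/port with your TomEE JMX endpoint.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9010/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            CompositeData cd = (CompositeData) conn.getAttribute(
                    new ObjectName("java.lang:type=Memory"), "HeapMemoryUsage");
            MemoryUsage heap = MemoryUsage.from(cd);

            // Note: getMax() can be -1 if no maximum is defined.
            double usedVsMax = 100.0 * heap.getUsed() / heap.getMax();
            double usedVsCommitted = 100.0 * heap.getUsed() / heap.getCommitted();
            System.out.printf("used/max       = %.1f%%%n", usedVsMax);
            System.out.printf("used/committed = %.1f%%%n", usedVsCommitted);
        } finally {
            connector.close();
        }
    }
}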

Related

Measuring time between two rising edges in BeagleBone

I am reading a sensor output as a square wave (0–5 V) on an oscilloscope. Now I want to measure the frequency of one period with a BeagleBone, so I need to measure the time between two rising edges. However, I don't have any experience working with the BeagleBone. Can you give me some advice or sample code for measuring the time between rising edges?
How deterministic do you need this to be? If you can tolerate some inaccuracy, you can probably do it on the main Linux OS; if you want to be fancy, this seems like a potential use case for the BBB's PRUs (which I unfortunately haven't used, so take this with substantial amounts of salt). I would expect you could write PRU code that sits in an infinite outer loop; inside that loop, it loops until it sees the pin read 0, then loops until the pin reads 1 (this is the first rising edge), then counts until either the pin reads 0 again (the falling edge) or, with another loop, until the next rising edge. Either way, you can take the counter value and convert it directly into time (the PRU is stated as having a fixed time per instruction, running at 200 MHz, i.e. 5 ns per instruction). Assuming your loop is something like
#starting with pin low
inner loop 1:
registerX = loadPin
increment counter
jump if zero registerX to inner loop 1
# pin is now high
inner loop 2:
registerX = loadPin
increment counter
jump if one registerX to inner loop 2
# pin is now low again
That should take 3 instructions per counter increment, so you can get the time as 3 * counter * 5 ns.
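For completeness, here is a small Java sketch of that conversion (the 3-instructions-per-iteration figure comes from the pseudocode above; the sample counter value is made up):

public class PruPeriod {
    // The BBB PRU executes one instruction every 5 ns (200 MHz), and the
    // polling loop sketched above takes roughly 3 instructions per iteration.
    static final double NS_PER_INSTRUCTION = 5.0;
    static final int INSTRUCTIONS_PER_LOOP = 3;

    static double periodNs(long counter) {
        return counter * INSTRUCTIONS_PER_LOOP * NS_PER_INSTRUCTION;
    }

    static double frequencyHz(long counter) {
        return 1e9 / periodNs(counter);
    }

    public static void main(String[] args) {
        long counter = 666_667; // made-up value read back from the PRU
        System.out.printf("period = %.0f ns, frequency = %.1f Hz%n",
                periodNs(counter), frequencyHz(counter));
    }
}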
As suggested by Foon in his answer, the PRUs are a good fit for this task (although, depending on your requirements, it may be fine to use the ARM processor and standard GPIO). Please note that (as far as I know) both the regular GPIOs and the PRU inputs use 3.3 V logic, and connecting a 5 V signal might fry your board! You will need an additional component or circuit to convert from 5 V to 3.3 V.
I've written a basic example that measures the timing between rising edges on header pin P8.15, for my own purpose of measuring an engine's RPM. If you decide to use it, you should check the timing results against a known reference; it's about right, but I haven't checked it carefully at all. It is implemented in PRU assembly and uses the pypruss Python module to simplify interfacing.

Method to get non-base units?

Is there a way to use the exponent properties of LabVIEW units to carry custom units? For example, I would find it convenient to use milli-amperes instead of amperes in my data wires.
My first attempt at doing so looks like this, but trying to get the value out at the end gives me nothing.
I would find it convenient to use milli-Amperes instead of Amperes in my data wires
For a wire, it's not possible, and it's not a problem; here's why:
I'm afraid what you want makes little sense, since your "milli-amperes instead of amperes" refers to how the data is represented, while a wire is just raw data. Adding the milli prefix to a floating-point number changes the exponent, not the mantissa, so there is no loss or gain of precision in the value your number carries.
Now, if we talk about an indicator, which is technically a display of the wire's value, you can change the unit from "A" to "mA" to get the display you want.
Finally, in your attempt with "Set Numeric Info", the -3 added next to amperes means the unit is A^-3, not mA.
You can use data without units, but then you will lose the automatic unit checking.
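To illustrate the point about precision, here is a tiny Java check (Java is used only because LabVIEW code is graphical and can't be quoted as text): a value written with a milli prefix denotes exactly the same floating-point number, so nothing is lost or gained.

public class PrefixDemo {
    public static void main(String[] args) {
        double amps = 0.708;                  // 0.708 A
        double sameInMilliNotation = 708e-3;  // "708 mA" written as a literal
        // Both literals denote the same double, bit for bit.
        System.out.println(amps == sameInMilliNotation);       // true
        System.out.println(Double.toHexString(amps));           // identical hex forms
        System.out.println(Double.toHexString(sameInMilliNotation));
    }
}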
For display properties you can tweak the display format to show different outputs:
This format string is constructed as follows:
%   numeric
^   engineering notation, exponents in multiples of three
#   no trailing zeros
_6  six significant digits
e   scientific notation (1e1, for instance)
The prefix is the best way to affect the presentation of the value on a specific front panel.
When passing data from VI to VI, the prefix is not passed, and the data uses the base unit (amps, volts, etc.).
In my example below, the unitless value 3 is assigned units of Amp in mA.vi. The front panel indicator is set to show units of mA.
In Watts.vi I multiply the Amps OUT of mA.vi by a constant of 9V and the result is wired to the indicator x*y.
x*y has units of W and I changed the prefix to k for presentation.
The NI forums have several threads reporting that certain functions (square and square root specifically) can cause unit errors or broken wires. Most folks don't even know the units capability exists, and most of those who do have tried it and abandoned it. :)

Threshold in SharePoint Dashboard Designer for a Percentage KPI

I have an SSAS cube that has a KPI whose Value is a percentage. I also have a Goal, which is the target percentage to keep the Value below.
I create the KPI fine, but when I import it into Dashboard Designer and set the scoring pattern and indicator (I used tick, exclamation mark, cross, which gives two thresholds), it always shows the tick even though the value is way over the goal.
I have set it so that decreasing is better, and the banding method is "Band by stated score", but it always shows on the scorecard as being On Target.
This is the threshold I currently have.
Is it something to do with having the goal as a percentage? Can anyone explain how Dashboard Designer thresholds work with percentages, please?
Update: I seem to have got it to work by setting the thresholds against the actual values instead.
The percentages work in terms of decimals:
0 = 0%
0.5 = 50%
1 = 100%
In your scenario, you can use -0.01, 0.8, 1, 1.01
Thanks,
Merin
I eventually found the answer to this, for anyone who still wants to know.
I did indeed need the "Band by stated score" banding method; the thing I was missing was where the score comes from. This is not immediately obvious, but it is set in the "Specify the worst value" section when editing the banding settings.
This needs to be set correctly relative to the threshold values.

Maximal input length/Variable input length for TinyGP

I am planning to use TinyGP to map a set of input variables (around 400 or so) to a predefined target value. Is there a maximum number of input variables? Do I need to specify the same number of variables each time?
I have a lot of computation power (a 500-core cluster for a weekend), so any thoughts on what parameters to use for such a large problem?
Cheers
In TinyGP your constant and variable pools share the same space. The total of these two cannot exceed FSET_START, which is essentially the opcode of your first operator; by default it is 110. So your 400 variables already exceed this limit. It should just be a matter of increasing the opcodes of the operators to make enough space. You will also want to make sure you still keep a big enough constant pool.
You can see this checked with the following line in TinyGP:
if (varnumber + randomnumber >= FSET_START)
    System.out.println("too many variables and constants");
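A hedged sketch of the adjustment described above (the constant names follow TinyGP's tiny_gp.java, but the exact values and the standalone class here are illustrative, not tested against the real sources):

public class TinyGpLimits {
    static final int VARNUMBER    = 400;  // your input variables
    static final int RANDOMNUMBER = 100;  // size of the ephemeral constant pool
    // Move the operator opcodes up so variables + constants fit below FSET_START.
    static final int ADD = 510, SUB = 511, MUL = 512, DIV = 513;
    static final int FSET_START = ADD, FSET_END = DIV;

    public static void main(String[] args) {
        // Mirrors the check quoted above.
        if (VARNUMBER + RANDOMNUMBER >= FSET_START)
            System.out.println("too many variables and constants");
        else
            System.out.println("opcode layout OK: " + (FSET_START - VARNUMBER)
                    + " slots left for constants");
    }
}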

How to analyze the min/max locations returned by OpenCV's cvMatchTemplate?

I am trying to detect objects in an image in an iPhone app.
I am using the cvMatchTemplate function, and I manage to see some patterns in the result it returns (I chose CV_TM_CCOEFF_NORMED).
Positive Results (result image is 163x371):
http://encryptedpixel.files.wordpress.com/2011/07/photo-13-7-11-11-52-19-am.jpeg
cvMinMaxLoc returns: min (102,244), max (11,210).
The min point makes some sense here: the position of the dark spot really is (102,244) in the 163x371 result image.
Negative Results:
cvMinMaxLoc returns: min (114,370), max (0,0).
This does not make sense; there are no matches at all, so why is there still a min point at (114,370)?
I need to know how to analyze these results programmatically so that I can say "Hey, I found the object!" in Objective-C for an iPhone app.
Thanks!
cvMinMaxLoc will always return the positions of the minimum and maximum values of its input; it only "doesn't make sense" in your particular application. You should check the value at the returned position of the minimum (or maximum) and, for example, threshold it to see whether it is a probable match for your template. A template match will yield a very low or a very high value, depending on the method you chose.
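For example, here is a hedged sketch using the OpenCV Java bindings (Java is used for the other examples in this thread; the same check applies to the C API the question uses, and the 0.8 threshold is an arbitrary starting point to tune):

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Point;
import org.opencv.imgproc.Imgproc;

public class TemplateMatchCheck {
    // Requires the OpenCV native library to be loaded first, e.g.
    // System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

    /** Returns the best-match location, or null if the match is too weak. */
    static Point findObject(Mat image, Mat templ) {
        Mat result = new Mat();
        Imgproc.matchTemplate(image, templ, result, Imgproc.TM_CCOEFF_NORMED);
        Core.MinMaxLocResult mm = Core.minMaxLoc(result);
        // With TM_CCOEFF_NORMED a strong match scores close to +1, so the
        // maximum value (not just its location) is what tells you whether
        // the object is actually present.
        return mm.maxVal > 0.8 ? mm.maxLoc : null;
    }
}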