vSphere PowerCLI commands to check CPU and memory utilization as percentages - powercli

How do I get the CPU utilization and memory utilization in terms of percentage of the hosts?
Get-VMHost | Export-Csv "C:\checks.csv"
This gives me the output in terms of MHz.

You'll need a couple of PowerShell tricks to perform that task.
First, calculated properties. Get-VMHost doesn't have an existing property for the CPU or memory usage percentage. It does, however, return both the usage and total amounts, which we can use to compute a percentage. To display it, we'll use what's known as a calculated property: a hashtable defining a custom property whose value is computed at runtime.
Example: @{Name = 'CpuUsage'; Expression = {$_.CpuUsageMhz / $_.CpuTotalMhz}}
Second, we'll need to convert the calculated value into something easier to read. We can use a format specifier that's part of the ToString method.
Example: @{Name = 'CpuUsage'; Expression = {($_.CpuUsageMhz / $_.CpuTotalMhz).ToString("P")}}
In the end, your code should look a bit like the following:
Get-VMHost | Select-Object Name, ConnectionState, PowerState, @{Name = 'CpuUsage'; Expression = {($_.CpuUsageMhz / $_.CpuTotalMhz).ToString("P")}}, @{Name = 'MemoryUsage'; Expression = {($_.MemoryUsageGB / $_.MemoryTotalGB).ToString("P")}}, Version | Export-Csv "C:\checks.csv"
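For comparison, the two steps above (divide usage by total, then format the ratio as a percentage) can be sketched in plain Python; the host values here are made up for illustration, not real Get-VMHost output.

```python
# Illustrative sketch of the calculated-property idea: divide usage by
# total, then format as a percentage. Host numbers are invented.
hosts = [
    {"Name": "esx01", "CpuUsageMhz": 11200, "CpuTotalMhz": 44800,
     "MemoryUsageGB": 96.0, "MemoryTotalGB": 256.0},
]

for host in hosts:
    cpu_pct = host["CpuUsageMhz"] / host["CpuTotalMhz"]
    mem_pct = host["MemoryUsageGB"] / host["MemoryTotalGB"]
    # "{:.2%}" plays the role of ToString("P"): ratio -> percentage string
    print(host["Name"], "{:.2%}".format(cpu_pct), "{:.2%}".format(mem_pct))
```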

Related

How is HeapMemoryUsagePercent calculated in JDK Mission Control (JMC)?

I wrote a program using the JMX API to calculate the JVM usage percentage. The code uses two attributes of the java.lang:type=Memory MBean (the used and max attributes).
When I use the formula (used/max * 100) to calculate the JVM used-memory percentage, it gives a very different value from what JMC displays. For example:
In JMC I see the percentage as 45.3%, but used = 708MB and max = 6GB, which gives a much lower percentage in my code.
From Task Manager > Process tab > Memory column, I looked at the corresponding TomEE memory usage, which is close to the used attribute in JMC.
I need some guidance on the right way to calculate the JVM usage percentage. Why is there a difference between the percentage in JMC's attribute (HeapMemoryUsagePercent) and the percentage calculated in my code?
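For reference, applying the raw used/max formula to the numbers quoted in the question gives a much smaller value than the 45.3% JMC displays, which is exactly the discrepancy being asked about:

```python
# used/max * 100 with the values from the question: 708 MB used, 6 GB max.
used_mb = 708
max_mb = 6 * 1024  # 6 GB expressed in MB

percent = used_mb / max_mb * 100
print(round(percent, 1))  # about 11.5, far below the 45.3% shown in JMC
```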

What are common traps when writing jq queries in terms of code complexity? Any profiling tools?

I have a 300-line jq program which runs for hours (literally) on the files I deal with (a plain list of 200K-2.5M JSON objects, 500MB-6GB in size).
At first glance the code looks linear in complexity, but I can easily miss something.
Are there common traps to be aware of regarding code complexity in jq? Or tools to identify the key bottlenecks in my code?
I'm a bit reluctant to make my code public, because of its size and complexity on one hand, and its somewhat proprietary nature on the other.
P.S. Trimming the input file to keep only the most relevant objects, and pre-deflating it to keep only the fields I need, are obvious steps towards optimizing my processing flow. I'm wondering what can be done specifically on the query-complexity side.
Often, a program that takes longer than expected is also producing incorrect results, so perhaps the first thing to check is that the results are correct. If they are, then the following might be worth checking:
avoid slurping (i.e., use input and/or inputs in preference);
beware of functions with arity greater than 0 that call themselves;
avoid recomputing intermediate results unnecessarily, e.g. by storing them in $-variables, or by including them in a filter's input;
use functions with "short-circuit" semantics when possible, notably any and all;
use limit/2, first/1, and/or foreach as appropriate;
remember that the implementation of index/1 on arrays can be a problem for large arrays, as it first computes all the indices;
use unique and group_by carefully, since both involve a sort;
use bsearch for insertion and for binary search for an item in a sorted array;
use JSON objects as dictionaries; this is generally a good idea.
Note also that the streaming parser (invoked with the --stream option) is designed to make the tradeoff between time and space in favor of the latter. It succeeds!
Finally, jq is stream-oriented, and using streams is sometimes more efficient than using arrays.
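The points about index/1 on arrays and about using JSON objects as dictionaries both come down to lookup cost. A quick Python analogy (not jq itself, just the same complexity argument):

```python
# Analogy for the jq advice above: finding a value by scanning an array
# is O(n) per lookup, while a dictionary keyed on the value is O(1)
# amortized once the index has been built.
items = list(range(100_000))

def find_linear(xs, target):
    # Array-style lookup: a linear scan, like jq's index/1 on arrays.
    for i, x in enumerate(xs):
        if x == target:
            return i
    return None

# Dictionary-style lookup: build the index once, then look up by key.
index = {x: i for i, x in enumerate(items)}

assert find_linear(items, 99_999) == 99_999  # walks the whole list
assert index[99_999] == 99_999               # single hash lookup
```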
Since you are evidently not a beginner, the likelihood of your making beginners' mistakes seems small, so if you cannot figure out a way to share some details about your program and data, you might try
breaking up the program so you can see where the computing resources are being consumed. Well-placed debug statements can be helpful in that regard.
The following filters for computing the elapsed clock time might also be helpful:
def time(f):
  now as $start | f as $out | (now - $start | stderr) | "", $out;

def time(f; $msg):
  now as $start | f as $out | ("\(now - $start): \($msg)" | stderr) | "", $out;
Example
def ack(m;n):
  m as $m | n as $n
  | if $m == 0 then $n + 1
    elif $n == 0 then ack($m-1; 1)
    else ack($m-1; ack($m; $n-1))
    end;
time( ack(3;7) | debug)
Output:
["DEBUG:",1021]
0.7642250061035156
1021
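An analogous elapsed-time wrapper can be sketched in Python, with the timing written to stderr so it stays separate from the result, as in the jq filters above:

```python
import sys
import time

sys.setrecursionlimit(10_000)  # ack recurses deeply; raise the default limit

def timed(f, msg=""):
    """Run f(), report elapsed wall-clock time on stderr, pass the result
    through. A rough Python analogue of the jq time(f; $msg) filter."""
    start = time.perf_counter()
    out = f()
    print(f"{time.perf_counter() - start}: {msg}", file=sys.stderr)
    return out

def ack(m, n):
    # Same Ackermann function as the jq example.
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

print(timed(lambda: ack(3, 7), "ack(3;7)"))  # prints 1021, as in the jq run
```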

How to implement a time based length queue in F#?

This is a followup to question: How to optimize this moving average calculation, in F#
To summarize the original question: I need to make a moving average of a set of data I collect; each data point has a timestamp and I need to process data up to a certain timestamp.
This means that I have a list of variable size to average.
The original question has the implementation as a queue where elements get added and are eventually removed as they become too old.
But, in the end, iterating through a queue to make the average is slow.
Originally the bulk of the CPU time was spent finding the data to average, but then once this problem was removed by only keeping the data needed in the first place, the Seq.average call proved to be very slow.
It looks like the original mechanism (based on Queue<>) is not appropriate and this question is about finding a new one.
I can think of two solutions:
implement this as a circular buffer which is large enough to accommodate the worst-case scenario; this would allow using an array and doing only two iterations to compute the sum.
quantize the data into buckets and pre-sum it, but I'm not sure the extra complexity will help performance.
Is there any implementation of a circular buffer that would behave similarly to a Queue<>?
The fastest code, so far, is:
module PriceMovingAverage =

    // moving average queue
    let private timeQueue = Queue<DateTime>()
    let private priceQueue = Queue<float>()

    // update moving average
    let updateMovingAverage (tradeData: TradeData) priceBasePeriod =
        // add the new price
        timeQueue.Enqueue(tradeData.Timestamp)
        priceQueue.Enqueue(float tradeData.Price)
        // remove the items older than the price base period
        let removeOlderThan = tradeData.Timestamp - priceBasePeriod
        let rec dequeueLoop () =
            let p = timeQueue.Peek()
            if p < removeOlderThan then
                timeQueue.Dequeue() |> ignore
                priceQueue.Dequeue() |> ignore
                dequeueLoop()
        dequeueLoop()

    // get the moving average
    let getPrice () =
        try
            Some (
                priceQueue
                |> Seq.average // <- all CPU time goes here
                |> decimal
            )
        with _ ->
            None
Based on a queue length of 10-15k I'd say there's definitely scope to consider batching trades into precomputed blocks of maybe around 100 trades.
Add a few types:
type TradeBlock = {
    data: TradeData array
    startTime: DateTime
    endTime: DateTime
    sum: float
    count: int
}
type AvgTradeData =
    | Trade of TradeData
    | Block of TradeBlock
I'd then make the moving average use a DList<AvgTradeData> (https://fsprojects.github.io/FSharpx.Collections/reference/fsharpx-collections-dlist-1.html). The first element in the DList is summed manually if startTime is after the price period, and removed from the list once the price period exceeds endTime. The last elements in the list are kept as Trade tradeData until 100 have accumulated; they are then all removed from the tail and turned into a TradeBlock.
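Language aside, the invariant both suggestions rely on is an incrementally maintained sum, so reading the average never rescans the queue. A minimal Python sketch of that idea (names are illustrative, not taken from the F# code):

```python
from collections import deque
from datetime import datetime, timedelta

class MovingAverage:
    """Time-windowed moving average with a running sum, so reading the
    average is O(1) instead of iterating the whole queue each time."""

    def __init__(self, window: timedelta):
        self.window = window
        self.items = deque()   # (timestamp, price) pairs, oldest first
        self.total = 0.0

    def update(self, timestamp: datetime, price: float) -> None:
        self.items.append((timestamp, price))
        self.total += price
        # Drop entries older than the window, adjusting the sum as we go
        # (mirrors the dequeueLoop in the F# code above).
        cutoff = timestamp - self.window
        while self.items and self.items[0][0] < cutoff:
            _, old_price = self.items.popleft()
            self.total -= old_price

    def average(self):
        return self.total / len(self.items) if self.items else None
```

The same running-sum bookkeeping carries over directly to a circular buffer or to the pre-summed TradeBlock approach.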

Optimizing code for better performance and quality

I have this calculation method that calculates 6 fields and a total.
It works.
The question is how I can optimize it, both performance-wise and code-quality-wise.
Just want to get some suggestions on how to write better code.
def _ocnhange_date(self):
    date = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    self.date = date
    self.drawer_potential = self.drawer_id.product_categ_price * self.drawer_qty
    self.flexible_potential = self.flexible_id.product_categ_price * self.flexible_qty
    self.runner_potential = self.runner_id.product_categ_price * self.runner_qty
    self.group_1_potential = self.group_1_id.product_categ_price * self.group_1_qty
    self.group_2_potential = self.group_2_id.product_categ_price * self.group_2_qty
    self.group_3_potential = self.group_3_id.product_categ_price * self.group_3_qty
    total = [self.drawer_potential, self.flexible_potential, self.runner_potential,
             self.group_1_potential, self.group_2_potential, self.group_3_potential]
    self.total_potentail = sum(total)
First things first: you should worry about performance mostly in batch operations. Your case is an onchange method, which means:
it will be triggered manually by user interaction.
it will only affect a single record at a time.
it will not perform database writes.
So, basically, this one will not be a critical bottleneck in your module.
However, you're asking how it could be improved, so here it goes. It's just an idea, in some points just different (not better), but this way you may see a different approach somewhere that you prefer:
def _ocnhange_date(self):
    # Use this alternative method, easier to maintain
    self.date = fields.Datetime.now()
    # Less code here, although almost equal
    # performance (possibly less)
    total = 0
    for field in ("drawer", "flexible", "runner",
                  "group_1", "group_2", "group_3"):
        potential = self["%s_id" % field].product_categ_price * self["%s_qty" % field]
        total += potential
        self["%s_potential" % field] = potential
    self.total_potential = total
I only see two things you can improve here:
Use Odoo's Datetime class to get "now" because it already takes Odoo's datetime format into consideration. In the end that's more maintainable, because if Odoo decides to change the whole format system wide, you have to change your method, too.
Try to avoid so many assignments and instead use methods which allow a combined update of some values. For onchange methods this would be update() and for other value changes it's obviously write().
def _onchange_date(self):
    self.update({
        'date': fields.Datetime.now(),
        'drawer_potential': self.drawer_id.product_categ_price * self.drawer_qty,
        'flexible_potential': self.flexible_id.product_categ_price * self.flexible_qty,
        # and so on
    })
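Stripped of the Odoo API, the loop-plus-single-update pattern from both suggestions looks like this in plain Python (the dict stands in for the Odoo record, and the flattened field names are illustrative):

```python
# Plain-Python sketch of the refactor: compute each potential in a loop,
# accumulate the total, then apply everything in one combined update.
# The dict stands in for the Odoo record; prices/quantities are invented.
record = {
    "drawer_id_price": 10.0, "drawer_qty": 2,
    "flexible_id_price": 5.0, "flexible_qty": 4,
}

values = {}
total = 0.0
for field in ("drawer", "flexible"):
    potential = record["%s_id_price" % field] * record["%s_qty" % field]
    values["%s_potential" % field] = potential
    total += potential
values["total_potential"] = total

record.update(values)  # one combined update, like self.update({...})
```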

Maximal input length/Variable input length for TinyGP

I am planning to use TinyGP to train a set of input variables (around 400 or so) against a value set beforehand. Is there a maximum number of input variables? Do I need to specify the same number of variables each time?
I have a lot of computation power (a 500-core cluster for a weekend), so any thoughts on what parameters to use for such a large problem?
cheers
In TinyGP your constant and variable pools share the same space. The total of these two cannot exceed FSET_START, which is essentially the opcode of your first operator; by default it is 110. So your 400 variables are already over this limit. It should just be a matter of raising the opcode of the first instruction to make enough space. You will also want to make sure you still have a big enough constant pool.
You can see this checked with the following line in TinyGP:
if (varnumber + randomnumber >= FSET_START)
    System.out.println("too many variables and constants");
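The arithmetic behind that check can be sketched numerically: with 400 variables, FSET_START must be raised well above its default of 110 while still leaving room for the constant pool (the constant-pool size below is illustrative):

```python
# TinyGP's terminal pool: variables and random constants share the opcodes
# below FSET_START, so varnumber + randomnumber must stay under it.
FSET_START = 110          # default first-operator opcode in TinyGP
varnumber = 400           # the question's input-variable count
randomnumber = 100        # size of the constant pool (illustrative)

# With the defaults this overflows, which is what the quoted check catches:
assert varnumber + randomnumber >= FSET_START

# Raising the first operator's opcode makes room for both pools:
FSET_START = 600
assert varnumber + randomnumber < FSET_START
```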