What is a good process I/O rate? - testing

I have conducted performance testing on an e-commerce website and am trying to find some bottlenecks. From Azure Application Insights > Performance, I checked the process I/O rate.
As you can see from the picture, the process I/O rate was 33.57 for the duration of the performance test, but I am not sure whether that is good or not. Can you please advise me on what a good I/O rate is for an e-commerce application? Thanks

Across millions of servers recorded in the Live Optics program, the read ratio is 69% and the average I/O transfer size is 34.4 KB. Just for simplicity's sake, let's round to 32 KB. Most environments will not have a single I/O transfer size.
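Purely as an illustration of how a rate and a transfer size combine into throughput, and assuming the counter is reported in operations per second (check the units of the metric in Application Insights first), 33.57 operations at ~32 KB each works out to roughly 1 MB/s:

# Assumption: the observed rate is I/O operations per second, not bytes per second.
io_rate_ops_per_s = 33.57
avg_transfer_size_bytes = 32 * 1024            # ~32 KB, rounded as above
throughput_bytes_per_s = io_rate_ops_per_s * avg_transfer_size_bytes
print(f"{throughput_bytes_per_s / 1_000_000:.2f} MB/s")   # ~1.10 MB/s

Whether that is "good" then depends on comparing it against what your storage and the rest of the stack can sustain, not on the number in isolation.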

Related

What is difference between upload speed and upload throughput?

I am trying to simulate different network speeds using selenium
Maybe I'm missing the point of the question but:
"Bandwidth and throughput have to do with speed, but what's the difference? To be brief, bandwidth is the theoretical speed of data on the network, whereas throughput is the actual speed of data on the network."
Pretty much: bandwidth is what your ISP will market to you, but your throughput is what you'll actually get on your side, in terms of speed. Throughput will almost always be lower than the marketed/advertised bandwidth.
source:
https://study.com/academy/lesson/bandwidth-vs-throughput.html#:~:text=Lesson%20summary,fast%20data%20is%20being%20sent.&text=Bandwidth%20refers%20to%20the%20theoretical,data%20on%20your%20network%20travels.
Possibly the term upload speed, in a broader sense, refers to internet speed, and you need speed for both uploading and downloading. Bandwidth and throughput are the two major indicators of speed, where:
Bandwidth is the theoretical speed of data on the network.
Throughput is the actual speed of data on the network.
Bandwidth
In true essence, bandwidth refers to the maximum amount of data you can get from point A to point B in a specific amount of time. These days, when dealing with computers, bandwidth refers to how many bits of information we can theoretically transmit in a given amount of time, expressed in bits per second, e.g. Kbps (kilobits per second) and Mbps (megabits per second).
Throughput
Throughput can never exceed what the bandwidth allows and is in practice lower, as factors like latency (delays), jitter (irregularities in the signal), and error rate (actual mistakes during transmission) reduce the overall throughput.
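As a rough illustration, throughput is what you get by timing an actual transfer and dividing the bytes moved by the time taken; a minimal sketch (the URL is a hypothetical placeholder for a reasonably large file you control):

import time
import urllib.request

url = "https://example.com/testfile.bin"   # hypothetical test file
start = time.monotonic()
data = urllib.request.urlopen(url).read()
elapsed = time.monotonic() - start
# Actual throughput in megabits per second, to compare against the advertised bandwidth.
print(f"Throughput: {len(data) * 8 / elapsed / 1_000_000:.2f} Mbps")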
I think what you are looking for is the method to do it: driver.set_network_conditions, which sets Chromium's network emulation settings.
from selenium import webdriver

driver = webdriver.Chrome()
driver.set_network_conditions(
    offline=False,
    latency=5,                       # additional latency (ms)
    download_throughput=500 * 1024,  # maximal throughput (bytes/s)
    upload_throughput=500 * 1024)    # maximal throughput (bytes/s)
Note: 'throughput' can be used to set both (for download and upload).
Source

Is it meaningful to monitor physical memory usage on AIX?

Due to AIX's special memory-management algorithm, is it meaningful to monitor physical memory usage in order to find the memory bottleneck during performance tuning?
If not, then what kind of KPI am I supposed to keep an eye on to determine whether we need to enlarge the RAM capacity?
Thanks
If a program requires more memory than is available as RAM, the OS will start swapping memory sections to disk as it sees fit. You'll need to monitor the output of vmstat and look for paging activity. I don't have access to an AIX machine right now to illustrate with an example, but I recall the man page is pretty good at explaining what data is represented there.
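If you want to automate that check, here is a rough sketch; it assumes the usual AIX vmstat layout, where the page-in/page-out columns are labelled pi and po, so adjust the parsing to whatever your vmstat actually prints:

import subprocess

# Take two 5-second samples; the last data row reflects recent activity.
out = subprocess.run(["vmstat", "5", "2"], capture_output=True, text=True).stdout
rows = [line.split() for line in out.splitlines() if line.strip()]
header = next(r for r in rows if "pi" in r and "po" in r)   # column-name row
sample = rows[-1]                                           # most recent sample
pi, po = int(sample[header.index("pi")]), int(sample[header.index("po")])
if pi or po:
    print(f"Paging activity: pi={pi} po={po} (possible RAM shortage)")
else:
    print("No paging activity in this sample")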
Also, this looks to be a good writeup about another AIX-specific systems monitoring tool for watching your system's overall memory (svmon).
http://www.aixhealthcheck.com/blog.php?id=255
To track the size of your individual application instance(s), there are several options, the most common being ps. Again, you'll have to check the man page for information on which options to use. There are several columns for memory size per process. You can compare those values to the overall memory available on your machine and, by tracking them over time, understand whether your application only ever increases its memory usage or releases memory when it is done with a task.
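As a rough sketch of tracking one instance over time with ps (the field names and flags differ between the ps flavours, so check the man page as noted; rss and the PID below are assumptions):

import subprocess
import time

pid = "12345"   # hypothetical PID of your application instance
for _ in range(10):
    # "-o rss=" asks for just the resident set size with no header; adjust the
    # field name to whatever your ps man page documents.
    rss = subprocess.run(["ps", "-o", "rss=", "-p", pid],
                         capture_output=True, text=True).stdout.strip()
    print(f"{time.strftime('%H:%M:%S')}  rss={rss}")
    time.sleep(60)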
Finally, there's quite a body of information from IBM on performance tuning for AIX, but I was never able to find a road-map guide to reading that information. A lot of it assumes you know facts and features that aren't explained in the current doc set, so you then have to try to find an explanation, which often leads to searching for yet another layer of explanations. :^/
IHTH.

Accurate way to detect rendering speed

I'm currently brainstorming an idea of mine that involves a p2p render farm, somewhat like renderfarm.fi, with the difference that you pay for the service and contributors to the processing pool get paid.
Currently render farms price their service in GHz/h, but when the rendering computers are untrusted, is there a good way to measure the equivalent GHz/h of a computer, considering the computers could be partially loaded with other programs slowing down the true time spent rendering, etc.?
Because your worker process can ask the OS counters how much execution time it has received, and that can be matched up with progress on the work package, you can pay out based on work units completed but charge in GHz/h. You know you can't trust the user's clock (or anything else, for that matter), but you can verify the work units returned and approximate their computational complexity by combining the counters reported from multiple peers.
You have no way to know for sure whether the system is particularly loaded, but you do know if work went out and came back. However, you will have to verify that the work was done correctly, which probably means over-provisioning and running every render twice on two different machines to ensure someone isn't inserting garbage results that are faster to compute.
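A minimal sketch of that double-render check, assuming the renders are deterministic so that two honest peers produce byte-identical output (the file names are hypothetical; a non-deterministic renderer would need a tolerance-based comparison instead):

import hashlib

def file_digest(path):
    # Hash a rendered frame so results from two peers can be compared cheaply.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# The same work unit was sent to two independently chosen peers.
if file_digest("frame_0042_peer_a.png") == file_digest("frame_0042_peer_b.png"):
    print("Results agree: credit both peers for the work unit")
else:
    print("Mismatch: re-render the work unit elsewhere and review both peers")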
Good luck. I don't know how you'll be able to beat the likes of Amazon with them charging ~$0.10 per GHz/h.
The operating system can, and most likely will, measure the actual CPU time taken up by the process. As such, that can be used as a measure of how much time the process itself has actually spent running on the machine's CPU. The CPU time doesn't get skewed in any direction by other processes running in the background, so it is well suited for this purpose.
The CPU time itself is the resource such rendering services sell, so it's logical to measure it on a per-user/client basis and then price the service according to the CPU time spent by each user/client of the render farm.
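A minimal sketch of metering that on a worker, using the process CPU clock (render_work_unit is a hypothetical stand-in for the actual rendering call):

import time

def render_work_unit(unit):
    # Hypothetical placeholder for the real rendering work.
    return sum(i * i for i in range(1_000_000))

start = time.process_time()          # CPU time consumed by this process only
render_work_unit("frame_0042")
cpu_seconds = time.process_time() - start
# Wall-clock time would be skewed by other processes on the machine; CPU time is not.
print(f"CPU time billed for this work unit: {cpu_seconds:.3f} s")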

Testing Real Time Operating System for Hardness

I have an embedded device (Technologic TS-7800) that advertises real-time capabilities, but says nothing about 'hard' or 'soft'. While I wait for a response from the manufacturer, I figured it wouldn't hurt to test the system myself.
What are some established procedures to determine the 'hardness' of a particular device with respect to real time/deterministic behavior (latency and jitter)?
Being at college, I have access to some pretty neat hardware (good oscilloscopes and signal generators), so I don't think I'll run into any issues in terms of testing equipment, just expertise.
With that kind of equipment, it ought to be fairly easy to sync the o-scope to a steady clock, produce a spike each time the real-time system produces an output, and see how much that spike varies from center. The less the variation, the greater the hardness.
To clarify Bob's answer maybe:
Use the signal generator to generate a pulse at some varying frequency.
Random distribution across some range would be best.
Use the signal generator (trigger signal) to start the scope.
The RTOS has to respond, do its thing, and send an output pulse.
Feed the RTOS output into input 2 of the scope.
Put the scope in persist/collect mode.
Get the scope to start on A, stop on B, if you can.
In an ideal world, get it to measure the distribution for you. A LeCroy would.
Start with a much slower trace than you would expect. You need to be able to see slow outliers.
You'll be able to see the distribution.
Assuming a normal distribution, the standard deviation of the response-time variation is the softness (a sketch of this calculation appears after this answer).
(A truly normal distribution won't really happen in practice, but if you don't get outliers it is reasonably useful.)
If there are outliers of large latency, then the RTOS is not very hard: it does not meet deadlines well and is unsuitable for hard real-time work.
Many RTOS-like things have a good left edge to the curve, sloping down like a 1/f curve.
That's indicative of combined jitter. The thing to look out for is spikes of slow response at the right end of the scope trace. If there are no outliers, keep repeating the experiment with faster traces to get a good image of the slope. That should be good for some speculative conclusions in your paper.
If, for your application, a delta of say 1 µs is okay and you measure 0.5 µs, it's all good.
Anyway, you can publish the results (probably in the publishing sense, but certainly on the web).
Link from this Question to the paper when you've written it.
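As mentioned above, here is a minimal sketch of turning captured response times into those statistics (the sample values are hypothetical; in practice you would export them from the scope or a logic analyser):

import statistics

# Hypothetical response-time samples in microseconds, exported from the scope.
samples_us = [12.1, 12.3, 11.9, 12.2, 12.4, 12.0, 12.2, 35.7, 12.1, 12.3]

mean = statistics.mean(samples_us)
softness = statistics.stdev(samples_us)   # SD of the response-time variation
worst = max(samples_us)
print(f"mean={mean:.1f} us  sd={softness:.2f} us  worst-case={worst:.1f} us")
# A worst case far above the mean (an outlier) is the red flag for "not hard".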
Hard real-time has more to do with how your software works than with the hardware on its own. When asking if something is hard real-time, the question must be applied to the complete system (hardware, RTOS, and application). This means hard or soft real-time is a system design issue.
Under loading that exceeds the specification, even a hard real-time system will fail (hopefully with proper failure indication), while a soft real-time system under low loading can give hard real-time results. How much processing must happen in time and how much pre-/post-processing can be performed is the real key to hard/soft real-time.
In some real-time applications, some data loss is not a failure; it just needs to stay below a certain level, which is again a system criterion.
You can generate inputs to the board and have a small application count them and check at what level data is going to be lost. But that gives you a rating specific to that system running that application. As soon as you start doing more processing your computational load increases and you now have a different hard real-time limit.
This board, running a bare-bones scheduler, will give great, predictable hard real-time performance for most tasks.
Running a full RTOS with a heavy computational load, you will probably only get soft real-time.
Edit after comment
The most efficient and easiest way I have used to measure my software's performance (assuming you use a scheduler) is to use a free-running hardware timer on the board and timestamp the start and end of my cycle, or, if you run a full RTOS, timestamp your acquisition and transition. Save your maximum time and keep a running average of the values over a second. If your average is around 50% and your maximum is within 20% of your average, you are OK; if not, it is time to refactor your application. As your application grows, the cycle time will grow, so you can monitor the effect of all your software changes on your cycle time (a rough sketch of this bookkeeping follows below).
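A rough, platform-agnostic sketch of that bookkeeping (time.monotonic_ns is only a stand-in for the free-running hardware timer, cycle_budget_ns is a hypothetical deadline, and "around 50%" is read here as 50% of that budget):

import time

cycle_budget_ns = 1_000_000        # hypothetical 1 ms cycle deadline

def run_cycle():
    # Hypothetical placeholder for one acquisition/processing cycle.
    pass

samples = []
for _ in range(1000):              # roughly one second's worth of 1 ms cycles
    start = time.monotonic_ns()    # stand-in for reading the hardware timer
    run_cycle()
    samples.append(time.monotonic_ns() - start)

avg, worst = sum(samples) / len(samples), max(samples)
print(f"avg={avg / cycle_budget_ns:.0%} of budget, max={worst / cycle_budget_ns:.0%} of budget")
if avg > 0.5 * cycle_budget_ns or worst > 1.2 * avg:
    print("Time to refactor: average above ~50% of budget or max more than 20% over average")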
Another way is to use a hardware timer to generate a cyclical interrupt. If you are in time, reset the interrupt; if you miss the deadline, have the interrupt handler signal a failure. This, however, will only warn you once your application is already taking too long, but it relies on hardware and interrupts, so you can't miss it.
These solutions also eliminate the need to hook up a scope to monitor the output, since the timing information can be displayed in any kind of terminal by a background task. If it is easy to monitor, you will monitor it regularly, solving timing problems as soon as they are introduced rather than at the end.
Hope this helps
I have the same board here at work. It's a slightly-modified 2.6 Kernel, I believe... not the real-time version.
I don't know that I've read anything in the docs yet that indicates that it is meant for strict RTOS work.
I think that this is not a hard real-time device, since it runs no RTOS.
I understand being a geek, but using an oscilloscope to test a computer with Ethernet/USB/other digital ports and a huge internal state (RAM) is both ineffective and unreliable.
Instead of watching waveforms, you can connect any PC to the output port and run a proper statistical analysis.
The established procedure (if the input signal is analog by nature) is to test the system against several characteristic inputs - traditionally spikes, step functions, and sine waves of different frequencies - and measure the phase shift and variance for each input type. The worst case is then used in the specifications of the system.
Again, if you are using standard ports, you can easily generate those on a PC. If the input is truly analog, a separate DAC or simply a good sound card would be needed.
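As a minimal sketch, those characteristic inputs can be generated as sample buffers to feed a DAC or sound card (the sample rate, pulse position, and frequencies are arbitrary placeholders):

import math

SAMPLE_RATE = 48_000   # hypothetical output sample rate (Hz)

def spike(n, at=100):
    return [1.0 if i == at else 0.0 for i in range(n)]

def step(n, at=100):
    return [0.0 if i < at else 1.0 for i in range(n)]

def sine(n, freq_hz):
    return [math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE) for i in range(n)]

# One second of each test input: spikes, a step, and sine waves of several
# frequencies, to be played into the device while its response is logged for
# the phase-shift and variance analysis described above.
test_signals = {
    "spike": spike(SAMPLE_RATE),
    "step": step(SAMPLE_RATE),
    **{f"sine_{f}Hz": sine(SAMPLE_RATE, f) for f in (10, 100, 1000)},
}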
Now, that won't say anything about the OS being real-time - it could be running vanilla Linux or even Windows CE and still produce good and stable results in those tests if the hardware is fast enough.
So, you need to simulate heavy and varying loads on the processor, memory, and all ports, let it heat up and eat memory for a few hours, and then repeat the tests. If latency stays constant, it's hard real-time. If it doesn't, but under any load and input signal type it never increases above an acceptable limit, it's soft real-time. Otherwise, it's advertisement.
P.S.: The implication is that even for critical systems you don't actually need hard real-time if you have fast enough hardware.

Is there an ideal number of network operations for iPhone OS?

I'm using NSOperation and NSOperationQueue to handle all of my networking threads so my interface can remain responsive while handling data transfer over the internet. Currently, I've got my operation queue set to a maximum concurrent operation count of 5, and it seems to work well.
I'm wondering, though, if there is a more ideal number of concurrent network operations that would best maximize the available resources without choking the hardware. Are there any recommendations, or steps I might take to measure and find out for myself?
Given the iPhone (currently) runs a single core, I would guess 5 is around the right number.
But the only way to be sure would be to instrument it and find out what the usage looks like (CPU, memory, and network). Network usage you could get based on the data transferred, but it's hard to know what a reasonable usage would be. I'm not sure whether it is possible to get CPU/memory statistics from the iPhone.
If you are doing large transfers, then more connections probably won't help much. If you are doing lots of small transfers, then more connections will help amortize the back-and-forth of setting up and tearing down connections.
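One way to measure it for yourself is to sweep the maximum concurrency and time a fixed batch of transfers at each setting. A language-agnostic sketch of that experiment (the URL is hypothetical; on the device you would instead vary maxConcurrentOperationCount on your NSOperationQueue and time the same batch):

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://example.com/small-resource"   # hypothetical endpoint
REQUESTS = 20                                # fixed batch of transfers

def fetch(_):
    return len(urllib.request.urlopen(URL).read())

for workers in (1, 2, 5, 10):
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(fetch, range(REQUESTS)))
    print(f"{workers:2d} concurrent: {time.monotonic() - start:.2f} s for {REQUESTS} requests")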