How to meter power (watts) of PC components (CPU, memory, disk, etc.) in real time? - api

As the question says, I want to monitor the power (in watts) that some components consume, especially the CPU, memory and disk.
When I use AIDA64, I found that under Computer/Sensor there is some data about power consumption. I want to know how it gets this data.
I already have some ideas, but I'm not sure which is the best way to solve this problem:
There are sensors on the motherboard; we could use the values from those sensors to calculate the real-time power.
Depending on the OS, there are APIs that report CPU utilization, memory throughput and disk I/O rates. Using this data, we could build a model of the PC's power consumption. If such APIs exist, where can I find them?
Maybe hardware manufacturers like Intel already record the power value in real time and put it into special hardware registers; we could read the value by mapping a special memory location.
In my opinion, the second way is probably the solution most monitoring software uses, but I just don't know where to find those APIs.
What's more, our aim is to design an OS-independent real-time power monitoring application, so if there are any better solutions to this problem, I would appreciate your help.

Hmmm. I wasn't sure if I should post this as a comment or an answer. It is an answer but in the negative.
At this time, you can't create an OS independent software-based non-intrusive power monitor. By non-intrusive, I mean that you are not putting special instrumentation on the motherboard and other hardware. This is because the power technology being used by modern processors is in rapid flux, each new generation making significant advances. Additionally, the amount of power related information available to software from the hardware (via PMU events and the like) is continually increasing as more silicon real estate becomes available. For example, I believe that in the most current processors, you can get direct thermal information for key parts of the processor silicon, and temperature, power and current readings from various parts of the core and uncore.
The best you can do is to abstract the top layer of your monitor from the lower layers. Then the top becomes OS / HW independent while the lower levels need to be platform dependent.
Check out the PAPI APIs. Note that the APIs appear to give you the world, but are really just an API set. Someone still has to implement what's on the other side of the API.
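As one concrete illustration of such a platform-dependent lower layer: on recent Intel CPUs running Linux, the RAPL energy counters are exposed through the powercap sysfs interface. Below is a minimal sketch, assuming the intel-rapl package domain exists at the path shown and is readable (this is deliberately Linux- and Intel-specific - exactly the kind of backend that would sit underneath an OS-independent top layer):

    /* Minimal sketch: read the CPU package energy counter from Linux's
     * intel-rapl powercap interface and derive average power over one second.
     * Assumes an Intel CPU, a kernel with the powercap/intel_rapl driver,
     * that /sys/class/powercap/intel-rapl:0/energy_uj exists and is readable,
     * and that the counter does not wrap during the interval. */
    #include <stdio.h>
    #include <unistd.h>

    static long long read_energy_uj(void)
    {
        FILE *f = fopen("/sys/class/powercap/intel-rapl:0/energy_uj", "r");
        long long uj = -1;
        if (f) {
            if (fscanf(f, "%lld", &uj) != 1)
                uj = -1;
            fclose(f);
        }
        return uj;   /* cumulative energy in microjoules */
    }

    int main(void)
    {
        long long e1 = read_energy_uj();
        sleep(1);
        long long e2 = read_energy_uj();
        if (e1 < 0 || e2 < 0 || e2 < e1) {
            fprintf(stderr, "could not read the RAPL energy counter\n");
            return 1;
        }
        /* power (W) = delta energy (J) / delta time (s); counter is in uJ */
        printf("average package power: %.2f W\n", (e2 - e1) / 1e6);
        return 0;
    }

On many platforms DRAM and uncore energy appear as sub-domains under the same sysfs tree; disks generally have no such counter at all, which is where modelling from utilization data comes back in.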
Now if you can do your own special instrumentation, many (most?) motherboards and other hardware have measurement points (some undocumented) that provide thermal, current (and so power) information. This information is important for debugging devices and platforms.

Related

Is it possible to have CAN on Arduino without extra hardware?

I would like to have an Arduino operating in a CAN network. Does software that provides the OSI-model network layer exist for Arduino? I would imagine detecting the HI/LOW levels with GPIO/ADC and sending the signal to the network with a DAC. It would be nice to have that without any extra hardware attached. I don't mind having the terminating resistor required by the CAN network, though.
By Arduino I mean any of them. My intention is to keep the development environment.
If such software does not exist, is there any technical obstacle to it, like limited flash size (again, I don't mean a particular board with a certain ATmega chip)?
You can write a bit-banging CAN driver, but it has many limitations.
First, there's the timing: it's hard to achieve the bit timing and also the arbitration.
You will be able to get 10 kbit/s or perhaps even 50 kbit/s, but that consumes a huge amount of your CPU time.
And the code itself is a pain.
You have to calculate the CRC on the fly (easy - see the sketch below), but implementing the collision detection and all the timing parameters is not easy.
I once did this for a company, but it was a really bad idea.
Better buy a chip for 1 Euro and be happy.
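For reference, the on-the-fly CRC mentioned above is the CAN CRC-15 (generator polynomial 0x4599), fed one bit at a time from the destuffed bit stream. A minimal sketch of the update step (a hypothetical helper, not from any existing Arduino library):

    /* CAN CRC-15 update, one bit at a time. Generator polynomial:
     * x^15 + x^14 + x^10 + x^8 + x^7 + x^4 + x^3 + 1 (0x4599).
     * Feed it the destuffed frame bits from SOF through the end of the
     * data field; stuff bits are not included in the CRC. */
    #include <stdint.h>

    uint16_t can_crc15_update(uint16_t crc, uint8_t bit)
    {
        uint8_t mix = ((crc >> 14) & 1u) ^ (bit & 1u);
        crc = (uint16_t)((crc << 1) & 0x7FFFu);
        if (mix)
            crc ^= 0x4599u;
        return crc;
    }

The CRC really is the easy part; as noted above, the hard part is hitting the bit timing and handling arbitration/collision detection while this runs in an interrupt or a tight loop.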
There are several CAN bus shield boards available (e.g. this, and this), and that would be a far better solution. It is not just a matter of the controller chip; the bus interface, line drivers, and power all need to be considered. If you have the resources and skills you can of course create your own board or bread-board for less.
Even if you bit-bang it via GPIO, you would need some hardware mods, I believe, to handle bus contention detection, and it would be very slow and may not interoperate well with "real" CAN controllers on the bus.
If your aim is to communicate between devices of your own design rather than off-the-shelf CAN devices, then you don't need CAN for that; something proprietary will suffice, and a UART will perform faster than a bit-banged CAN implementation.
I don't think that such software exists. The CAN bus is more complex than, for example, I2C. Basically you would have to implement the functionality of both a CAN controller and a CAN transceiver. See this thread for more details (in German).
Alternatively, you could use one of the CAN shields. Another option would be to use a BeagleBone with a suitable CAN cape.
Also take a look at AVR-CAN.

How can I build a single-board computer like the Raspberry Pi to run an OS?

My question is: how can I build a single-board computer like the Raspberry Pi that can run an OS?
Using an ARM microprocessor and a Debian ARM OS, with USB and so on.
Like the Raspberry Pi and other single-board computers.
I searched but found nothing to help me! :(
The reason you can find nothing is probably because it is a specialist task undertaken by companies with appropriate resources in terms of expertise, equipment, tools and money.
High-end microprocessors capable of running an OS such as Linux use high-pin-density surface-mount packages such as BGA or TQFP; these (especially BGA) require specialist equipment to manufacture and cannot reliably or realistically be assembled by hand. The pin count and density necessitate the use of multi-layer boards, which again require specialist manufacture.
What you would have to do if you wanted your own board is to design the board, source the components, and then have it manufactured by a contract electronics assembly house. Short runs and one-offs will cost you many times that of just buying a COTS development or application board. It is only cost-effective if you are ultimately manufacturing a product that will sell in high volumes. It is only these volumes that make the RPi so inexpensive (and, until recently, Chinese manufacture).
Even if you designed and had your own board built, that in itself requires specialist knowledge and skill. The bus speeds on such processors require very specific layout to maintain signal integrity and timing and to avoid EMC problems. The cost of suitable schematic capture and board layout software might also be prohibitive; no doubt there are some reasonably capable open-source tools, but you will have to find one that generates output your manufacturer can use to set up their machinery.
Some lower-end 8-bit microcontrollers with low pin counts are suitable for hand soldering or even DIP socketing, using a bread-board or prototyping board, but that is not what you are after.
[Further thoughts added 14 Sep 2012]
This is probably only worth doing if one or more of the following are true:
Your aim is to gain experience in board design, manufacture and bring-up as an academic or career development exercise and you have the necessary financial resources.
You envisage high production volumes where the economies of scale make it less expensive than a COTS board.
You have product requirements for specific features or form-factor not supported by COTS boards.
You have restricted product requirements, where a custom board tailored to those and having no redundant features might, in sufficient volumes, be cost-effective.
Note that COTS boards come in two types: Application modules intended for integration in a larger system or product, and development boards that tend to have a wide range of peripherals, switches, indicators and connectivity options and often a prototyping area for your own use.
I know this is an old question, but I've been looking into the same thing, possibly for different reasons, and it now comes up at the top of a Google search, providing more reasons not to ask or even look into it than it provides answers.
For an overview of what it takes to build a linux running board from scratch this link is incredibly useful:
http://hforsten.com/making-embedded-linux-computer.html
It details:
The bare minimum you need in terms of hardware (ARM processor, NAND flash, etc.)
The complexities of getting a board designed
The process of programming the new chip on the board to include bootloaders and then pointing them to a linux kernel for the chip to boot.
Whether the OP wishes to pursue all or just some of these challenges, it is useful to know what the challenges are.
And these won't be all of them; adding displays, graphics and other hardware and interfaces is not covered, but this is a start.
Single-board computers (SBCs) are expected to take more load than a normal hobby board, so they have a slightly more complicated structure in terms of PCB and components. You should be ready to work with BGA packages; almost all processors in SBCs are BGA (no DIP/QFP). Here is the best blog post that I recently came across. It's a very nicely designed and fabricated board running Linux on an ARM processor. The author has really done a great job at designing as well as documenting the process. I hope it helps you to understand both the hardware and software side of SBCs.
A lot of answers are discouraging, but I would say you can do it, as I have done it already with the i.MX233. It's not easy, and it's not a weekend project. My project link is MyIMX233.
It took me about 4-5 months.
It didn't cost me much; a small fine-tip soldering iron is what I used.
The hard part is learning to design the PCB.
The next task is to find a PCB manufacturer with good enough precision and a reasonable prototyping price.
The next task after that is to source components.
You may not get it right; I got the PCB right by my 3rd iteration. After that I was able to repeatedly produce 3 more boards, all of which worked fine.
PCB design - I used the open-source KiCad. You need to take care with impedance matching between the RAM and processor buses, and some other high-speed buses. I managed to do it on a 2-layer board with 5 mil/5 mil trace/space.
Component sourcing - I got the i.MX233 LQFP once via Mouser, and once via element14.
RAM - 64 MB TSSOP.
Soldering - it's easy to mess up here, but the key is patience. One caution: don't use a frying pan and solder paste to do reflow soldering; I literally fried my first 2 processors like this. Even hot-air soldering by a mobile repair shop was not good enough.
Boot image - I didn't take much of a chance here; I just went with the Arch Linux image by Olimex.
If you want to skip the trouble of designing the circuit between the RAM and the processor, skip the i.MX233 and go for the Allwinner V3S. In 2017/2018 this would be the easiest approach.
Bottom line is I am a software engineer by profession, and if I can do it, then you can do it.
Why not use an FPGA board?
Something with a Zynq, like the Zybo board, or from Altera, like the DE0-Nano SoCKit.
There you already have the ARM core, memory, etc., plus the possibility to add the logic you're missing.

How to decide system requirements for embedded systems application/software

How should I decide system requirements like:
RAM capacity
FLASH memory capacity
Processor frequency
etc
I am building an application to control NAND flash, an LCD driver, UART communication and a keypad, using a 16-bit microcontroller.
This has to be estimated from previous projects with similar functionality, or even from other people's products. But it is best to develop with a larger capacity and decide on the final parts when your software nears completion, because it's easier to omit components than to try to find room for them later. This kind of design can be an iterative process: start with one estimate, see if a prototype works, and don't commit to volumes until you are nearly at the end.
In the case of an LCD-based product, you will have two major components using up flash memory: the code and the LCD data (character strings, bitmaps, etc.). It's certainly easier to estimate the LCD data than the code, which depends on functionality, compiler optimisations, etc. If you are bringing in external libraries then at least you already have code for them.
In any case, have an upgrade plan. The worst thing is to run out of capacity at the end of the project and be struggling to optimise the last feature or debug fix in without adding another problem. Make sure you know what the next size up in chips is and how you can get them to fit; sometimes a PCB can be designed to take various different chips in the same position. Or have an expandable system, where you can plug things into a memory bus.
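As a rough illustration of estimating the LCD-data side, here is a back-of-the-envelope sketch; every asset count and size below is a made-up placeholder, not a number from the question:

    /* Back-of-the-envelope flash estimate for LCD assets. All quantities are
     * hypothetical placeholders; plug in your real asset counts and sizes.
     * A full-screen 128x64 1-bpp bitmap is 128*64/8 = 1024 bytes. */
    #include <stdio.h>

    int main(void)
    {
        unsigned bitmap_bytes = (128 * 64) / 8;   /* one full-screen monochrome image */
        unsigned num_bitmaps  = 20;               /* icons, splash screens, ... */
        unsigned string_bytes = 200 * 32;         /* ~200 messages at ~32 bytes each */
        unsigned font_bytes   = 2 * (96 * 6);     /* two 6x8 fonts, 96 glyphs, 6 bytes/glyph */

        unsigned total = num_bitmaps * bitmap_bytes + string_bytes + font_bytes;
        printf("LCD data: roughly %u KiB of flash\n", total / 1024);
        return 0;
    }

The code side is much harder to pin down, which is why the advice above to prototype on a larger part and only commit to the final memory size late in the project is sound.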
How many units will you be making?
If your volumes are low (<1,000) but per-unit profits are high and time to market matters, more hardware will get the developers done sooner.
If the volumes are huge (>1,000,000) and profits per unit are low, then you penny-pinch the hardware, but development time will go up. If time to market matters, that's a trade-off.
Design the board with 2x the capacity (RAM/flash), but don't load the parts, other than to check it works.
Then if you run out of room, there is somewhere to go.
Will customers expect to get firmware updates? Or is this a drop-ship product with no support? A supportable product is harder and needs more resources.
You'll need to pad resources to have room to expand into if the product needs support for a long time.
For CPU frequency estimates: how much work is required to be done?
Get an eval board for a likely MCU and prove out the core function.
Let us say it's a display for a piece of exercise equipment. Can it keep up with the sensors on the device at 2-3x the designed pace? That means reading the sensors and updating the display. If cost is required to be low, you can then underclock the eval board and see what trades can be made.

LabVIEW + National Instruments hardware or ???

I'm in the process of buying a new data acquisition system for my company to use for various projects. At first, its primary purpose will be to monitor up to 20 thermocouples and control the temperature of a composites oven. However, I also plan on using it to monitor accelerometers and strain gauges, and to act as a signal generator.
I probably won't be the only one to use it, but I have a good bit of programming experience with Atmel microcontrollers (C). I've used LabVIEW before, but ~5 years ago. LabVIEW would be good because it is easy to pick up for both me and my coworkers. On the flip side, it's expensive. Right now I have an NI CompactDAQ system with 2 voltage cards and one thermocouple card plus LabVIEW spec'd out, and it's going to cost $5779!
I'm going to try to get the same I/O capabilities with different NI hardware plus LabVIEW to see if I can get it for less money. I'd also like to see if anyone has any suggestions other than LabVIEW.
Thanks in advance!
Welcome to test and measurement. It's pretty expensive for pre-built stuff, but you trade money for time.
You might check out the somewhat less expensive Agilent 34970A (and associated cards). It's a great workhorse for different kinds of sensing, and, if I recall correctly, it comes with some basic software.
For simple temperature control, you might consider a PID controller (Watlow or Omega used to be the brands, but it's been a few years since I've looked at them).
You also might look into the low-cost usb solutions from NI. The channel count is lower, but they're fairly inexpensive. They do still require software of some type, though.
There are also a fair number of good smaller companies (like Hytek Automation) that produce some types of measurement and control devices or sub-assemblies, but YMMV.
There's a lot of misconception about what will and will not work with LabView and what you do and do not need to build a decent system with it.
First off, as others have said, test and measurement is expensive. Regardless of what you end up doing, the system you describe IS going to cost thousands to build.
Second, you don't NEED to use NI hardware with LabView. For thermocouples your best bet is to look into multichannel or multiple single-channel thermocouple units - something that reads from a thermocouple and outputs to something like RS-232, etc. The OMEGABUS Digital Transmitters are an example, but many others exist.
In this way, you need only a breakout card with lots of RS-232 ports, and you can grow your system as it needs to grow. You can still use LabView to acquire the data via RS-232 and then display, log, process it, etc., however you like.
Third-party signal generators would also work, for example. You can pick up good ones (with a GPIB connection) reasonably cheaply and, with a GPIB board, can integrate them into LabView as well. That is, if you want something like a function generator, of course (duty-cycled pulses, standard sine/triangle/ramp functions, etc.). If you're talking about arbitrary signal generation then this remains a reasonably expensive thing to do (if $5000 is our goalpost for "expensive").
This also hinges on what you're needing the signal generation for - if you're thinking of control signals then, again, there may be cheaper and more robust options available. For temperature control, for example, separate hardware PID controllers are probably the best bet. This also takes care of your thermocouple problem, since PID controllers will typically accept thermocouple inputs as well. In this way you only need one interface (RS-232, for example) to the external PID controller, and you have total access in LabView to temperature readings as well as the ability to control setpoints and PID parameters in one unit.
Perhaps if you could elaborate on not just the system components as you've planned them at present, but the ultimate system functionality, it may be easier to suggest alternatives - not simply alternative hardware, but alternative system design altogether.
Edit:
Have a look at Omega CNi8C22-C24 and CNiS8C24-C24 units -> these are temperature and strain DIN PID units which will take inputs from your thermocouples and strain gauges, process the inputs into proper measurements, and communicate with LabView (or anything else) via RS-232.
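To give a sense of what talking to one of these RS-232 instruments looks like from your own code rather than from LabView, here is a minimal POSIX sketch. The device path, baud rate and the "T?\r" query string are hypothetical placeholders; a real Omega unit has its own documented command set, so substitute the commands from its manual:

    /* Sketch: poll a serial (RS-232) instrument for one reading.
     * /dev/ttyS0, 9600 baud and the "T?\r" query are placeholders - use the
     * port, speed and command set from your instrument's manual. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <termios.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY);
        if (fd < 0) { perror("open"); return 1; }

        struct termios tio;
        if (tcgetattr(fd, &tio) != 0) { perror("tcgetattr"); return 1; }
        tio.c_iflag = 0;                      /* raw input: no CR/LF mangling, no flow control */
        tio.c_oflag = 0;                      /* raw output */
        tio.c_lflag = 0;                      /* no echo, no canonical line editing */
        tio.c_cflag = CS8 | CREAD | CLOCAL;   /* 8 data bits, enable receiver, ignore modem lines */
        tio.c_cc[VMIN]  = 1;                  /* block until at least one byte arrives */
        tio.c_cc[VTIME] = 10;                 /* then allow up to 1 s between bytes */
        cfsetispeed(&tio, B9600);
        cfsetospeed(&tio, B9600);
        tcsetattr(fd, TCSANOW, &tio);

        const char *query = "T?\r";           /* hypothetical "read temperature" command */
        write(fd, query, strlen(query));

        char buf[64];
        ssize_t n = read(fd, buf, sizeof buf - 1);   /* naive blocking read of the reply */
        if (n > 0) {
            buf[n] = '\0';
            printf("instrument replied: %s\n", buf);
        }
        close(fd);
        return 0;
    }

In LabView the same exchange would go through VISA serial read/write nodes, but the structure (send a query, parse the ASCII reply, repeat per channel) is the same.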
This isn't necessarily a software answer, but if you want low-cost data acquisition, you might want to look at the LabJack. It's basically a microcontroller and USB interface wrapped in a nice box (like an Arduino (Atmel AVR + USB-serial converter), but closed source) with a lot of drivers and functions for various languages, including LabView.
Reading a thermocouple can be tough because microvolts are significant, so you either need a high-resolution A/D or an amplifier on the input. I think NI may sell a specialized digitizer for thermocouple readings, but again you'll pay.
As far as the software answer goes, LabView will work nicely with almost any hardware you choose - e.g. I built my own temperature controller based on an Arduino (with an AD7780), wrote a little interface using serial commands, and then talked with it using LabView. But if you're willing to pay a premium for a guaranteed-to-work-out-of-the-box solution, you can't go wrong with LabView and an NI part.
LabWindows CVI is NI's C IDE, with good integration with their instrument libraries and drivers. If you're willing to write C code, maybe you could get by with the base version of LabWindows CVI, versus having to buy a higher-end LabView version that has the functionality you need. LabWindows CVI and LabView are priced identically for the base versions, though, so that may not be much of an advantage.
Given the range of measurement types you plan to make and the fact that you want colleagues to be able to use this, I would suggest LabVIEW is a good choice - it will support everything you want to do and make it straightforward to put a decent GUI on it. Assuming you're on Windows then the base package should be adequate and if you want to build stand-alone applications, either to deploy on other PCs or to make a particular setup as simple as possible for your colleagues, you can buy the application builder separately later.
As for the DAQ hardware, you can certainly save money - e.g. Measurement Computing have a low cost 8-channel USB thermocouple input device - but that may cost you in setup time or be less robust to repeated changes in your hardware configuration for different tests.
I've got a bit of experience with LabView stuff, and if you can afford it, it's awesome (and useful for a lot of different applications).
However, if your applications are simple you might actually be able to hack something together with one or two Arduinos here; it's OSS, and has some good cheap hardware boards.
LabView really comes into its own with real-time applications or RAD (because GUI dev is super easy), so if all you're doing is running a couple of thermopiles I'd find something cheaper.
A few thousand dollars is not a lot of money for process monitoring and control systems. If you do a cost/benefit analysis, you will very quickly recover your development costs if the scope of the system is right and if it does the job it is intended to do.
Another tool to consider is National Instruments Measurement Studio with VB .NET. This way you can still use the NI hardware if you want and can still build nice GUIs quickly.
Alternatively, as others have said, it is perfectly viable to get industrial serial based instruments and talk to them with LabVIEW, VB .NET, c# or whatever you like.
If you go down the route of serial instruments, another piece of hardware that might be useful is a serial terminal (example). These allow you to connect arbitrary numbers of devices to your network. Your computers can then use them as though they were physical COM ports.
Have you looked at MATLAB? They have a toolbox called Data Acquisition, and CompactDAQ is supported hardware.
LabVIEW is a great visual programming environment if we want to drag, drop and visualize our system. NI hardware also comes with the NI-DAQmx library, which can be accessed from our own code. A feasible solution for you would probably be to import the libraries into another programming language and write code for all the activities which you would otherwise perform using LabVIEW. Though other overheads like code optimization become the user's responsibility, you are free to tweak the normal method flow by introducing your own improvements at suitable junctures in the DAQ process.

Testing a Real-Time Operating System for Hardness

I have an embedded device (Technologic TS-7800) that advertises real-time capabilities, but says nothing about 'hard' or 'soft'. While I wait for a response from the manufacturer, I figured it wouldn't hurt to test the system myself.
What are some established procedures to determine the 'hardness' of a particular device with respect to real time/deterministic behavior (latency and jitter)?
Being at college, I have access to some pretty neat hardware (good oscilloscopes and signal generators), so I don't think I'll run into any issues in terms of testing equipment, just expertise.
With that kind of equipment, it ought to be fairly easy to sync the o-scope to a steady clock, produce a spike each time the real-time system produces an output, and see how much that spike varies from center. The less the variation, the greater the hardness.
To clarify Bob's answer maybe:
Use the signal generator to generate a pulse at some varying frequency.
A random distribution across some range would be best.
Use the signal generator (trigger signal) to start the scope.
The RTOS has to respond, do its thing, and send an output pulse.
Feed the RTOS output into input 2 of the scope.
Set the scope to persist/collect mode.
Set the scope to start on A, stop on B, if you can.
In an ideal world, get it to measure the distribution for you. A LeCroy would.
Start with a much slower trace than you would expect. You need to be able to see slow outliers.
You'll be able to see the distribution.
Assuming a normal distribution, the SD of the response-time variation is the SOFTNESS.
(This won't really happen in practice, but if you don't get outliers it is reasonably useful.)
If there are outliers of large latency, then the RTOS is NOT very hard. It does not meet deadlines well, and is therefore unsuitable for hard real-time work.
Many RTOS-like things have a good left edge to the curve, sloping down like a 1/f curve.
That's indicative of combined jitters. The thing to look out for is spikes of slow response at the right end of the scope. Keep repeating the experiment with faster traces if there are no outliers, to get a good image of the slope. That should be good for some speculative conclusions in your paper.
If, for your application, a delta of 1 us is okay and you measure 0.5 us, it's all cool.
Anyway, you can publish the results (possibly in the formal publication sense, but certainly on the web).
Link from this Question to the paper when you've written it.
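If you capture the response times with a counter/timer or logic analyzer instead of reading them off the scope, the statistics described above (mean, SD as the "softness", worst-case outlier) are easy to compute yourself. A quick sketch; the samples array is hypothetical test data, not real measurements:

    /* Sketch: summarize logged response-time samples (in microseconds) as
     * mean, standard deviation ("softness") and worst case. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double samples_us[] = { 11.2, 10.9, 11.4, 11.0, 37.5, 11.1, 11.3 };
        int n = sizeof samples_us / sizeof samples_us[0];

        double sum = 0.0, worst = 0.0;
        for (int i = 0; i < n; i++) {
            sum += samples_us[i];
            if (samples_us[i] > worst)
                worst = samples_us[i];
        }
        double mean = sum / n;

        double var = 0.0;
        for (int i = 0; i < n; i++)
            var += (samples_us[i] - mean) * (samples_us[i] - mean);
        var /= n;

        printf("mean %.1f us, sd %.1f us, worst %.1f us\n", mean, sqrt(var), worst);
        /* A worst case far beyond mean + a few SDs is exactly the slow outlier
         * that disqualifies the system for hard real-time use. */
        return 0;
    }
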
Hard real-time has more to do with how your software works than with the hardware on its own. When asking if something is hard real-time, the question must be applied to the complete system (hardware, RTOS and application). This means hard or soft real-time is a system design issue.
Under loading exceeding the specification, even a hard real-time system will fail (hopefully with proper failure indication), while a soft real-time system under low loading could give hard real-time results. How much processing must happen in time and how much pre/post-processing can be performed is the real key to hard/soft real-time.
In some real-time applications some data loss is not a failure; it just has to stay below a certain level - again, a system criterion.
You can generate inputs to the board and have a small application count them and check at what level data starts to be lost. But that gives you a rating specific to that system running that application. As soon as you start doing more processing, your computational load increases and you now have a different hard real-time limit.
This board running a bare-bones scheduler will give great, predictable hard real-time performance for most tasks.
Running a full RTOS with a heavy computational load, you will probably only get soft real-time.
Edit after comment
The most efficient and easiest way I have used to measure my software's performance (assuming you use a scheduler) is to use a free-running hardware timer on the board and timestamp the start and end of my cycle. Or, if you run a full RTOS, timestamp your acquisition and transition. Save your max time and run an average on the values over a second. If your average is around 50% and your max is within 20% of your average, you are OK. If not, it is time to refactor your application. As your application grows, the cycle time will grow. You can monitor the effect of all your software changes on your cycle time.
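A minimal sketch of that timestamping idea follows. timer_read_us() is only a stub so the sketch compiles on a PC; on a real board it would read a free-running hardware timer, and the 1 ms budget and the cycle count per report are example numbers, not requirements:

    /* Sketch: timestamp the start and end of every cycle with a free-running
     * timer, keep the max, and report an average roughly once per second.
     * timer_read_us() is a placeholder stub; replace it with your board's
     * free-running microsecond counter. */
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t timer_read_us(void)
    {
        static uint32_t fake_time;   /* stub so this compiles and runs on a PC */
        return fake_time += 700;     /* pretend each call happens 700 us later */
    }

    #define CYCLE_BUDGET_US 1000u    /* example deadline: a 1 ms cycle */

    int main(void)
    {
        uint32_t max_us = 0, sum_us = 0, count = 0;

        for (int cycle = 0; cycle < 5000; cycle++) {
            uint32_t start = timer_read_us();
            /* ... the real work of the cycle would run here ... */
            uint32_t elapsed = timer_read_us() - start;  /* wrap-safe: unsigned math */

            if (elapsed > max_us) max_us = elapsed;
            sum_us += elapsed;

            if (++count >= 1000) {   /* roughly one second of 1 ms cycles */
                uint32_t avg = sum_us / count;
                /* rule of thumb above: avg near 50% of budget, max within ~20% of avg */
                printf("avg %u us, max %u us, budget %u us\n", avg, max_us, CYCLE_BUDGET_US);
                max_us = sum_us = count = 0;
            }
        }
        return 0;
    }

On the target, the printf would be replaced by whatever background/terminal reporting the application already has, as described below.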
Another way is to use a hardware timer to generate a cyclic interrupt. If you are in time, reset the interrupt; if you miss the deadline, have the interrupt handler signal a failure. This, however, only gives you a warning once your application is already taking too long, but it relies on hardware and interrupts, so you can't miss it.
These solutions also eliminate the requirement to hook up a scope to monitor the output, since the timing information can be displayed in any kind of terminal by a background task. If it is easy to monitor, you will monitor it regularly, catching timing problems as soon as they are introduced instead of trying to solve them all at the end.
Hope this helps
I have the same board here at work. It's a slightly-modified 2.6 Kernel, I believe... not the real-time version.
I don't know that I've read anything in the docs yet that indicates that it is meant for strict RTOS work.
I think that this is not a hard real-time device, since it runs no RTOS.
I understand being a geek, but using an oscilloscope to test a computer with Ethernet/USB/other digital ports and a HUGE internal state (RAM) is both ineffective and unreliable.
Instead of watching waveforms, you can connect any PC to the output port and run proper statistical analysis.
The established procedure (if the input signal is analog by nature) is to test the system against several characteristic inputs - traditionally spikes, step functions and sine waves of different frequencies - and measure the phase shift and variance for each input type. The worst case is then used in the specifications of the system.
Again, if you are using standard ports, you can easily generate those on a PC. If the input is truly analog, a separate DAC or simply a good sound card would be needed.
Now, that won't say anything about the OS being real-time - it could be running vanilla Linux or even Win CE and still produce good and stable results in those tests if the hardware is fast enough.
So, you need to simulate heavy and varying loads on the processor, memory and all ports, let it heat up and eat memory for a few hours, and then repeat the tests. If the latency stays constant, it's hard real-time. If it varies but never rises above the acceptable limit under any load and input signal type, it's soft. Otherwise, it's advertisement.
P.S.: The implication is that even for critical systems you don't actually need hard real-time if the hardware is fast enough.