Need assistance with building a LabVIEW setup for Pressure/Temperature/RPM/Voltage/Amperage

I'm working on assembling a LabVIEW setup that has the ability to measure Pressure, Temperature, RPM, Voltage and Amperage. I believe I have identified the correct modules but am looking for another opinion before giving anyone a lot of money.
Requirements:
Temperature: Able to measure 7 channels of temperatures ranging from ambient to 300 degrees F.
RPM: Able to measure shaft RPM up to 3600.
Voltage: Able to measure up to 500 Volts, 3 phase AC.
Amperage: Able to measure up to 400 Amps, 3 phase AC.
Pressure: Able to measure 2 channels of various ranges of PSI (specific transducers to be identified at a later date).
The Gear:
Chassis: cDAQ-9174 with the PS-14.
Temperature: T type thermocouples and NI-9212.
RPM: Monarch Instrument Remote Optical Laser Sensor and NI-9421. The laser uses 24 volts but returns 19 volts when the target is present and 0 volts when it is not.
Voltage*: ATO three phase AC voltage sensor ATO-VOS-3AC500 outputting 0-5 volts, and either NI-9224 or NI-9252.
Amperage*: three Fluke i400 units returning 1 mV per amp, and either NI-9224 or NI-9252.
Pressure: two 4-20 mA 2- or 3-wire pressure transducers to be identified at a later date, and either NI-9203 or NI-9253.
*Voltage and amperage will be measured on the same unit
Questions:
RPM: Will the NI-9421 record a pulse of 19 volts?
Voltage and Amperage: What is the difference between the NI-9224 and the NI-9252, and which one would work best for my application?
Pressure: What is the difference between the NI-9203 and the NI-9253 other than input resolution, and which one would work best for my application? Resolution is not a priority.
Overall: Anything stand out as a red flag?
I have not tried any of this equipment out myself.
Thanks in advance for your expertise and patience.

First things first, I would encourage you to strike up a conversation with whichever NI distributor is local to you. Checking specifications and compatibility between sensors, modules, chassis, etc. is very much in their wheelhouse, and typically falls in the pre-sales phase of discussion so you shouldn't need to spend money to get their expertise.
(Also, if you're new to LabVIEW and NI: I very much recommend checking out the NI forums in addition to Stack Exchange. Both are generally pretty helpful communities!)
One thing I'm not seeing in the requirements you listed, and that would be very helpful, is timing: sample rates. What frequency do you need to sample each of these inputs, and for how long? How much jitter and skew between samples is acceptable? Building a table of signal characteristics, including the original project specification, the specification in units of the measurement device, the minimum sample rate, analog/digital, and which module the channel is on, will make configuring a chassis to meet your needs a lot easier.
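For instance, something like this (the rates below are made-up placeholders, just to show the shape of the table):

Signal        Project spec         Device units     Min rate    A/D      Module
Temp ch 1-7   ambient to 300 °F    T-type TC, mV    10 S/s      analog   NI-9212
Shaft RPM     0-3600 RPM           pulses/s (Hz)    n/a (ctr)   digital  NI-9421
Pressure 1-2  TBD PSI              4-20 mA          100 S/s     analog   NI-9203/9253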
For a cDAQ system, the sample rates you measure at, and how many different ones you can run at one time, are determined by the chassis rather than the module. (PCI/PXI data acquisition cards have the timing engine on the card.) On the cDAQ-9174 you can run multiple tasks per chassis but only one task per module, so you may need to group your inputs onto modules that run at similar rates to fit into the available tasks. I put a link to NI's documentation of the cDAQ timing engine at the bottom.
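For illustration, here is a minimal sketch (untested) of one analog-input task grouping channels on a single module so they share one timing engine. It uses the DAQmx C API rather than LabVIEW G code since this is text; the module name, channel range and rate are placeholders for your setup.

#include <NIDAQmx.h>
#include <stdio.h>

int main(void)
{
    TaskHandle ai = 0;
    float64 data[300];   /* 100 samples x 3 channels */
    int32 read = 0;

    DAQmxCreateTask("", &ai);
    /* Three 0-5 V sensor outputs on one module -> one task, one sample clock. */
    DAQmxCreateAIVoltageChan(ai, "cDAQ1Mod2/ai0:2", "", DAQmx_Val_Cfg_Default,
                             0.0, 5.0, DAQmx_Val_Volts, NULL);
    DAQmxCfgSampClkTiming(ai, "", 1000.0, DAQmx_Val_Rising,
                          DAQmx_Val_ContSamps, 1000);
    DAQmxStartTask(ai);

    DAQmxReadAnalogF64(ai, 100, 10.0, DAQmx_Val_GroupByChannel,
                       data, 300, &read, NULL);
    printf("read %d samples per channel\n", (int)read);

    DAQmxStopTask(ai);
    DAQmxClearTask(ai);
    return 0;
}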
Now to try to summarize the questions:
Homer512 is correct about voltage: 11 V is the ON threshold. However, the NI-9421 can only count pulses up to 10 kHz into the counter, so you need to know how many pulses are generated per rotation. Napkin math: one pulse per rotation at 3600 RPM is only a 60 Hz pulse stream, comfortably within that limit, but a multi-pulse encoder (say, 200 pulses per revolution) would put you at 12 kHz and over it. (This is why timing is everything. You also probably don't want to transfer every single pulse to calculate the RPM constantly; more likely you want the counter to sum up the pulses as fast as they happen, and at a slower rate you check how many counts went by since your last check-in.)
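To make that concrete, here is a minimal sketch (untested) of the "count fast in hardware, poll slowly in software" pattern, again written against the DAQmx C API; the counter name and PULSES_PER_REV are placeholders.

#include <NIDAQmx.h>
#include <stdio.h>
#include <unistd.h>

#define PULSES_PER_REV 1   /* assumption: one optical pulse per shaft turn */

int main(void)
{
    TaskHandle task = 0;
    uInt32 prev = 0, now = 0;

    DAQmxCreateTask("", &task);
    /* Count rising edges; the counter free-runs in hardware. */
    DAQmxCreateCICountEdgesChan(task, "cDAQ1Mod5/ctr0", "",
                                DAQmx_Val_Rising, 0, DAQmx_Val_CountUp);
    DAQmxStartTask(task);

    for (;;) {
        sleep(1);   /* slow software poll; the hardware misses nothing */
        DAQmxReadCounterScalarU32(task, 1.0, &now, NULL);
        printf("RPM: %.0f\n", (double)(now - prev) * 60.0 / PULSES_PER_REV);
        prev = now;
    }
}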
Homer512 is correct again: the NI-9252 has additional hardware filtering before the ADCs. That is for filtering frequency content on the input source, not usually something you need if you're just reading a 5 V signal from a sensor.
The NI-9203 uses a SAR ADC (200 kS/s), the NI-9253 uses a delta-sigma ADC (50 kS/s/ch). Long story short: the NI-9253 is more accurate but slower. I'd need more information to make a best-for-application judgement, specifically numerical requirements on resolution and timing.
Red flags: kinda captured it in the other points, but the project requirements have some gaps. I've had "resolution is not a priority"-style statements, and requirements given in a unit other than the measurement device's (RPM vs. pulses/s or Hz), bite me enough times that I highly recommend writing everything down even if it's blatantly obvious.
Links may move in the future, and the titles are weird, but here are a few relevant NI docs:
"Number of Concurrent Tasks on a CompactDAQ Chassis Gen II" https://www.ni.com/en-us/support/documentation/supplemental/18/number-of-concurrent-tasks-on-a-compactdaq-chassis-gen-ii.html
"cDAQ Module Support for Accessing On-board Counters" https://www.ni.com/en-us/support/documentation/supplemental/18/cdaq-module-support-for-accessing-on-board-counters.html

Related

ALSA Sample Rate drift vs monotonic clock

I have an application that samples audio at 8 kHz using ALSA. This is set via snd_pcm_hw_params() and can be confirmed by looking at /proc:
cat /proc/asound/card1/pcm0c/sub0/hw_params
access: MMAP_INTERLEAVED
format: S32_LE
subformat: STD
channels: 12
rate: 8000 (8000/1)
period_size: 400
buffer_size: 1200
The count of samples read over time is effectively a monotonic clock.
If I compare the number of samples read with the system monotonic clock I note there is a drift over time. The sample clock appears to lose 1s roughly every 5 hours relative to the monotonic clock.
I have code to compensate for this at the application level (i.e. to correctly map sample counts to wall clock times) but I am wondering if we can or why we can't do better at a lower level?
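Roughly, the compensation looks like this (a minimal sketch of the idea, not the actual code; it estimates the true rate from the monotonic clock and uses that to timestamp samples):

#include <time.h>

static double now_s(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

/* Called periodically: the measured rate converges toward e.g. ~7999.56 Hz. */
static double measured_rate(long long samples_read, double start_s)
{
    return samples_read / (now_s() - start_s);
}

/* Map a sample index to wall-clock time using the measured rate, not 8000. */
static double sample_time(long long sample_index, double start_s, double rate)
{
    return start_s + sample_index / rate;
}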
Both clocks are based on oscillators of some kind, which may have some small error. So likely we are sampling at something like 7999.5 Hz rather than 8000 Hz (losing 1 s every 5 hours is a drift of roughly 55 ppm, about 0.45 Hz at this rate), and the error builds up over time. Equally, the system clock may have some small error in it.
The system clock is corrected periodically by NTP, which perhaps permits more error, but even so this deviation seems much larger than I would intuitively expect.
However, see for example http://www.ntp.org/ntpfaq/NTP-s-sw-clocks-quality.htm
In theory NTP can generate a drift file which you could use to see the drift rate of your system clock.
I would have thought that, knowing there is some small error, something would try to autocorrect, either by swapping between two differently wrong sample rates (e.g. 8000.5 Hz and 7999.5 Hz) or by dropping the occasional sample. In fact I thought this kind of thing was done at the hardware or firmware level in order to stabilize the average frequency given a crystal with a known error.
Also I would have thought quartz crystals are put in circuits these days with at least temperature compensation.

How to meter power (watts) of PC components (CPU, memory, disk, etc.) in real time?

As the title says, I want to monitor the power (in watts) that some components consume, especially the CPU, memory and disk.
When I use AIDA64 I find that under Computer/Sensor there is some data about power consumption. I want to know how it gets this data.
I already have some ideas, but I'm not sure which is the best way to approach this:
1. There are sensors on the motherboard; we could use their values to calculate real-time power.
2. Each OS has APIs that report CPU utilization, memory throughput and disk I/O rates. Using this data we could build a model of the PC's power consumption. If such APIs exist, where can I find them?
3. Maybe hardware manufacturers like Intel already record the power value in real time and put it into some special hardware register; we could read the value by mapping it into a special memory location.
In my opinion the second way may be what most monitoring software uses, but I just don't know where to get those APIs.
What's more, our aim is to design OS-independent real-time power monitoring software, so if there are any better solutions I would appreciate your help.
Hmmm. I wasn't sure if I should post this as a comment or an answer. It is an answer but in the negative.
At this time, you can't create an OS independent software-based non-intrusive power monitor. By non-intrusive, I mean that you are not putting special instrumentation on the motherboard and other hardware. This is because the power technology being used by modern processors is in rapid flux, each new generation making significant advances. Additionally, the amount of power related information available to software from the hardware (via PMU events and the like) is continually increasing as more silicon real estate becomes available. For example, I believe that in the most current processors, you can get direct thermal information for key parts of the processor silicon, and temperature, power and current readings from various parts of the core and uncore.
The best you can do is to abstract the top layer of your monitor from the lower layers. Then the top becomes OS / HW independent while the lower levels need to be platform dependent.
Check out the PAPI APIs. Note that the APIs appear to give you the world, but are really just an API set. Someone still has to implement what's on the other side of the API.
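For what it's worth, here is a sketch of the question's third idea (hardware registers) on one specific platform: modern Intel CPUs expose RAPL energy counters, which the Linux kernel surfaces through the powercap sysfs interface. This is exactly the kind of platform-dependent lower layer described above, not an OS-independent solution; the sysfs path varies by kernel and CPU, and the counter wraps periodically.

#include <stdio.h>
#include <unistd.h>

static long long read_energy_uj(void)
{
    long long uj = -1;
    FILE *f = fopen("/sys/class/powercap/intel-rapl:0/energy_uj", "r");
    if (f) { fscanf(f, "%lld", &uj); fclose(f); }
    return uj;
}

int main(void)
{
    long long e0 = read_energy_uj();
    sleep(1);                         /* sample the energy counter over 1 s */
    long long e1 = read_energy_uj();
    if (e0 >= 0 && e1 >= e0)          /* ignores counter wraparound for brevity */
        printf("CPU package power: %.2f W\n", (e1 - e0) / 1e6);
    return 0;
}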
Now if you can do your own special instrumentation, many (most?) motherboards and other hardware have measurement points (some undocumented) that provide thermal, current (and so power) information. This information is important for debugging devices and platforms.

Two analog channels affect each other in PIC

I am doing a project to recognize gestures by reading ADC values on a PIC16F73 using embedded C. Everything works fine while using a single ADC channel. When I use multiple channels, the values affect each other. Is this a hardware error or a software problem?
Probably. It's very likely to be one, or the other, or both. Split the problem in half.
Eliminate one at a time. Scope/meter on both analog inputs. Change one input - does the other change too? If it does, there is a hardware issue at least. If not, it's software.
This is debugging 101.
It's a hardware effect, but not an error.
From the datasheet:
11.1 A/D Acquisition Requirements
For the A/D converter to meet its specified accuracy, the charge holding capacitor (CHOLD) must be allowed to fully charge to the input channel voltage level. The analog input model is shown in Figure 11-2. The source impedance (RS) and the internal sampling switch (RSS) impedance directly affect the time required to charge the capacitor CHOLD. The sampling switch (RSS) impedance varies over the device voltage (VDD), see Figure 11-2. The source impedance affects the offset voltage at the analog input (due to pin leakage current). The maximum recommended impedance for analog sources is 10 kΩ. After the analog input channel is selected (changed), the acquisition period must pass before the conversion can be started.
To calculate the minimum acquisition time, TACQ, see the PICmicro™ Mid-Range MCU Family Reference Manual (DS33023). In general, however, given a maximum source impedance of 10 kΩ and at a temperature of 100°C, TACQ will be no more than 16 µsec.
It will likely be because you have high impedance sources driving all the ADC pins. When the multiplexer switches from one input to the next, any charge that is stored on the sampling capacitor of the ADC from the previous input will still be there.
If you drive each input with the output of a suitable op amp, then when the ADC's multiplexer switches, the op amp can drive charge into or pull charge out of the sampling capacitor, and the settling time needed for the new input can be significantly reduced. Plus, with this method you are not loading the signal you are trying to read.
If you cannot drive with a low impedance source, then ensure you have plenty of time for the new input's value to settle.
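A minimal sketch of that in embedded C (the ADCON0 layout is from the PIC16F73 datasheet; the 20 µs delay and the 4 MHz clock are assumptions for your circuit):

#include <xc.h>
#define _XTAL_FREQ 4000000UL        /* assumption: 4 MHz oscillator */

unsigned char read_adc(unsigned char channel)
{
    /* CHS2:CHS0 live in ADCON0 bits 5:3 on this part. */
    ADCON0 = (ADCON0 & 0xC7) | ((channel & 0x07) << 3);
    __delay_us(20);                 /* TACQ: let CHOLD charge to the new input */
    ADCON0 |= 0x04;                 /* set GO/nDONE to start the conversion */
    while (ADCON0 & 0x04)           /* hardware clears it when done */
        ;
    return ADRES;                   /* 8-bit result register on the 16F73 */
}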

How to decide system requirements for embedded systems application/software

How should I decide system requirements like:
RAM capacity
FLASH memory capacity
Processor frequency
etc
I am building an application to control NAND FLASH, LCD driver, UART control, keypad control using a 16 bit micro-controller.
This has to be estimated from previous projects with similar functionality, or even other people's products. But it is best to develop with a larger capacity and decide on final parts when your software nears completion, because it's easier to omit components than to try to find room for them later. This kind of design can be an iterative process: start with one estimate, see if a prototype works, and don't commit to volumes until you are nearly at the end.
In the case of an LCD-based product, you will have two major components using up flash memory: the code and the LCD data (character strings, bitmaps etc.). It's certainly easier to estimate the LCD data than the code, which depends on functionality, compiler optimisations etc. If you are bringing in external libraries then at least you already have the code for them.
In any case, have an upgrade plan. The worst thing is to run out of capacity at the end of the project and be struggling to squeeze the last feature or bug fix in without creating another problem. Make sure you know what the next size up of chip is and how you can get it to fit; sometimes a PCB can be designed to take several different chips in the same position. Or have an expandable system, where you can plug things into a memory bus.
How many units will you be making?
If your volumes are low (<1e3), but per unit profits high and time to market matters, more hardware will get the developers done sooner.
If the volumes are huge (>1e6), profits per unit low, then you penny pinch the hardware, but time to develop will go up. If time to market matters, that's a tradeoff.
Design the board with 2x the capacity (RAM/flash), but don't load the parts, other than to check it works.
Then if you run out of room, there is somewhere to go.
Will customers expect to get firmware updates? Or is this a drop-ship product with no support? Supportable is harder and needs more resources.
You'll need to pad resources to have room to expand into if the product needs support for a long time.
For CPU frequency estimates, how much work is required to be done?
Get an Eval board for a likely MCU and prove out the core function.
Let us say it's a display for a piece of exercise equipment. Can it keep up with the sensors on the device at 2-3x the designed pace? That's reading the sensors and updating the display. If cost is required to be low, you can then underclock the eval board and see what trades can be made.

Testing Real Time Operating System for Hardness

I have an embedded device (Technologic TS-7800) that advertises real-time capabilities, but says nothing about 'hard' or 'soft'. While I wait for a response from the manufacturer, I figured it wouldn't hurt to test the system myself.
What are some established procedures to determine the 'hardness' of a particular device with respect to real time/deterministic behavior (latency and jitter)?
Being at college, I have access to some pretty neat hardware (good oscilloscopes and signal generators), so I don't think I'll run into any issues in terms of testing equipment, just expertise.
With that kind of equipment, it ought to be fairly easy to sync the o-scope to a steady clock, produce a spike each time the real-time system produces an output, and see how much that spike varies from center. The less the variation, the greater the hardness.
To clarify Bob's answer maybe:
Use the signal generator to generate a pulse at some varying frequency.
Random distribution across some range would be best.
Use the signal generator (trigger signal) to start the scope.
The RTOS has to respond, do its thing and send an output pulse.
Feed the RTOS output into input 2 of the scope.
Put the scope in persist/collect mode.
Get the scope to start on A and stop on B, if you can.
In an ideal world, get it to measure the distribution for you. A LeCroy would.
Start with a much slower trace than you would expect. You need to be able to see slow outliers.
You'll be able to see the distribution.
Assuming a normal distribution, the standard deviation of the response-time variation is the SOFTNESS.
(This won't really happen in practice, but if you don't get outliers it is reasonably useful.)
If there are outliers of large latency, then the RTOS is NOT very hard: it does not meet deadlines well and is unsuitable for hard real-time work.
Many RTOS-like things have a good left edge to the curve, sloping down like a 1/f curve.
That's indicative of combined jitters. The thing to look out for is spikes of slow response at the right end of the scope. Keep repeating the experiment with faster traces if there are no outliers, to get a good image of the slope. That should be good for some speculative conclusions in your paper.
If, for your application, say a delta of 1 µs is okay and you measure 0.5 µs, it's all cool.
Anyway, you can publish the results (possibly in the formal sense, but certainly on the web).
Link from this question to the paper when you've written it.
Hard real-time has more to do with how your software works than the hardware on its own. When asking if something is hard real-time it must be applied to the complete system (Hardware, RTOS and application). This means hard or soft real-time is system design issues.
Under loading exceeding the specification, even a hard real-time system will fail (hopefully with proper failure indication), while a soft real-time system with low loading can give hard real-time results. How much processing must happen on time, and how much pre/post-processing can be deferred, is the real key to hard/soft real-time.
In some real-time applications some data loss is not a failure; it just has to stay below a certain level. Again, a system criterion.
You can generate inputs to the board and have a small application count them and check at what level data starts to be lost. But that gives you a rating specific to that system running that application. As soon as you start doing more processing, your computational load increases and you now have a different hard real-time limit.
This board, running a bare-bones scheduler, will give great, predictable hard real-time performance for most tasks.
Running a full RTOS with a heavy computational load, you will probably only get soft real-time.
Edit after comment
The most efficient and easiest way I have found to measure my software's performance (assuming you use a scheduler) is to use a free-running hardware timer on the board and timestamp the start and end of each cycle. Or, if you run a full RTOS, timestamp your acquisition and transition. Save your max time and keep a running average of the values over a second. If your average is around 50% (of your cycle budget) and your max is within 20% of your average, you are OK. If not, it is time to refactor your application. As your application grows, the cycle time will grow; you can monitor the effect of all your software changes on your cycle time.
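A sketch of that approach (read_hw_timer() and TICKS_PER_US are placeholders for whatever free-running counter your board has):

#include <stdint.h>

extern uint32_t read_hw_timer(void);    /* free-running hardware counter */
#define TICKS_PER_US 48u                /* assumption: 48 MHz timer */

static uint32_t max_ticks;
static uint64_t sum_ticks;
static uint32_t samples;

void timed_cycle(void (*cycle)(void))
{
    uint32_t t0 = read_hw_timer();
    cycle();                              /* the work being measured */
    uint32_t dt = read_hw_timer() - t0;   /* unsigned math handles wraparound */

    if (dt > max_ticks) max_ticks = dt;   /* save your max time */
    sum_ticks += dt;                      /* average these once per second */
    samples++;
}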
Another way is to use a hardware timer to generate a cyclical interrupt. If you are in time, reset the interrupt; if you miss the deadline, the interrupt handler signals a failure. This will only give you a warning once your application is taking too long, but it relies on hardware and interrupts, so you can't miss it.
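A sketch of that one too (restart_deadline_timer(), do_cycle_work() and the ISR hookup are placeholders for your platform):

#include <stdint.h>

extern void restart_deadline_timer(void);  /* reload the timer before it expires */
extern void do_cycle_work(void);           /* the real-time work */
volatile uint32_t missed_deadlines;        /* read by a background/terminal task */

void main_cycle(void)
{
    do_cycle_work();             /* must finish before the timer expires */
    restart_deadline_timer();    /* made it: push the deadline out again */
}

void deadline_timer_isr(void)    /* fires only when a deadline was missed */
{
    missed_deadlines++;          /* hardware-backed, so it can't be missed */
}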
These solutions also eliminate the need to hook up a scope to monitor the output, since the timing information can be displayed in any kind of terminal by a background task. If it is easy to monitor, you will monitor it regularly, solving timing problems as soon as they are introduced instead of at the end.
Hope this helps
I have the same board here at work. It's a slightly modified 2.6 kernel, I believe... not the real-time version.
I don't know that I've read anything in the docs yet that indicates that it is meant for strict RTOS work.
I think that this is not a hard real-time device, since it runs no RTOS.
I understand being a geek, but using an oscilloscope to test a computer with Ethernet/USB/other digital ports and a HUGE internal state (RAM) is both ineffective and unreliable.
Instead of watching wave forms, you can connect any PC to the output port and run proper statistical analysis.
The established procedure (if the input signal is analog by nature) is to test the system against several characteristic inputs - traditionally spikes, step functions and sine waves of different frequencies - and to measure the phase shift and variance for each input type. The worst case is then used in the system's specification.
Again, if you are using standard ports, you can easily generate those on a PC. If the input is truly analog, a separate DAC or simply a good sound card would be needed.
Now, that won't say anything about the OS being real-time - it could be running vanilla Linux or even Win CE and still produce good and stable results in those tests if the hardware is fast enough.
So you need to simulate heavy and varying loads on the processor, memory and all ports, let it heat up and eat memory for a few hours, and then repeat the tests. If latency stays constant, it's hard real-time. If latency rises but never exceeds an acceptable limit under any load and input signal type, it's soft. Otherwise, it's advertisement.
P.S.: The implication is that even for critical systems you don't actually need hard real-time if the hardware is fast enough.