RF module and European RF standards

My experience with RF is almost negligible, but now I'm in a position where I need to use some RF modules, probably this one.
There is a statement that confuses me, saying: two frequency variants are available in the European unlicensed band, one with 5 mW RF power and no duty-cycle restriction, the other with 25 mW RF power and a 1% duty-cycle restriction.
I intend to use the 25 mW option, and my understanding of the statement above is that if I want to transmit something, I have to send serial data for 1 ms (for example) and then stay silent for the next 100 ms. Am I right, or does the module do some buffering and take the 1% into account itself, or ...?

I believe the duty cycle refers to how the data is modulated: the 1% duty cycle means the carrier switching only allows the module to transmit for 1% of the time. This is fully controlled by the 25 mW module, so you don't need to worry about it.
The 5 mW module uses less transmit power, and probably extends the pulse duty cycle to compensate, providing a similar data rate and transmission distance.
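For what it's worth, if it turns out your host does have to pace transmissions itself, the bookkeeping is simple (note that ETSI duty-cycle limits are normally averaged over one hour, so 1% corresponds to 36 s of total airtime per hour). A minimal C sketch, with a placeholder 1 ms packet airtime:

#include <stdio.h>

#define DUTY_PERCENT 1.0  /* regulatory duty-cycle limit */

/* Minimum silent time that must follow a packet of the given airtime
 * to keep the long-term duty cycle at or below DUTY_PERCENT. */
static double min_silence_ms(double airtime_ms)
{
    return airtime_ms * (100.0 / DUTY_PERCENT - 1.0);
}

int main(void)
{
    double airtime = 1.0;  /* e.g. a 1 ms packet */
    printf("After a %.1f ms packet, stay silent for at least %.1f ms\n",
           airtime, min_silence_ms(airtime));  /* -> 99.0 ms */
    return 0;
}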
Good luck with your project.

Need assistance with building a LabVIEW setup for Pressure/Temperature/RPM/Voltage/Amperage

I'm working on assembling a LabVIEW setup that can measure pressure, temperature, RPM, voltage, and amperage. I believe I have identified the correct modules, but am looking for a second opinion before giving anyone a lot of money.
Requirements:
Temperature: able to measure 7 channels of temperatures ranging from ambient to 300 degrees F.
RPM: able to measure shaft RPM up to 3600.
Voltage: able to measure up to 500 volts, 3-phase AC.
Amperage: able to measure up to 400 amps, 3-phase AC.
Pressure: able to measure 2 channels of various ranges of PSI (specific transducers to be identified at a later date).
The Gear:
Chassis: cDAQ-9174 with the PS-14.
Temperature: T-type thermocouples and NI-9212.
RPM: Monarch Instrument Remote Optical Laser Sensor and NI-9421. The laser is powered from 24 volts but returns 19 volts when the target is present and 0 volts when it is not.
Voltage*: ATO three-phase AC voltage sensor ATO-VOS-3AC500 outputting 0-5 volts, and either the NI-9224 or NI-9252.
Amperage*: three Fluke i400 units returning 1 mV per amp, and either the NI-9224 or NI-9252.
Pressure: two 4-20 mA 2- or 3-wire pressure transducers to be identified at a later date, and either the NI-9203 or NI-9253.
*Voltage and amperage will be measured on the same unit.
Questions:
RPM: Will the NI-9421 record a pulse of 19 volts?
Voltage and amperage: What is the difference between the NI-9224 and the NI-9252, and which one would work best for my application?
Pressure: What is the difference between the NI-9203 and the NI-9253 other than input resolution, and which one would work best for my application? Resolution is not a priority.
Overall: Does anything stand out as a red flag?
I have not tried any of this equipment myself.
Thanks in advance for your expertise and patience.
First things first, I would encourage you to strike up a conversation with whichever NI distributor is local to you. Checking specifications and compatibility between sensors, modules, chassis, etc. is very much in their wheelhouse, and typically falls in the pre-sales phase of discussion so you shouldn't need to spend money to get their expertise.
(Also, if you're new to LabVIEW and NI: I very much recommend checking out the NI forums in addition to Stack Exchange. Both are generally pretty helpful communities!)
One thing I'm not seeing in the requirements you listed, and that would be very helpful, is timing requirements/sample rates. At what frequency do you need to sample each of these inputs, and for how long? How much jitter and skew between samples is acceptable? Building a table of signal characteristics, including the original project specification, the specification in units of the measurement device, the minimum sample rate, analog/digital, and which module the channel is on, will make configuring a chassis to meet your needs a lot easier.
For a cDAQ system, the sample rates you measure at, and how many different ones you can run at one time, are determined by the chassis rather than the module. (PCI/PXI data acquisition cards have the timing engine on the card.) On the cDAQ-9174 you can run multiple tasks per chassis but only one task per module, so you may need to group your inputs onto modules that run at similar rates to fit into the available tasks. I put links to NI's documentation of the cDAQ timing engine at the bottom.
Now to try to summarize the questions:
Homer512 is correct about voltage: 11 V is the ON threshold. However, the NI-9421 can only count pulses up to 10 kHz into the counter. How many pulses are generated per rotation? Napkin math says one pulse per rotation at 3600 RPM is only a 60 Hz pulse stream, but the rate scales with pulses per revolution, and anything above about 166 pulses per revolution would exceed the counter's limit. (This is why timing is everything. You also probably don't want to transfer every single pulse to calculate the RPM constantly; more likely you need the counter to sum up the pulses as fast as they happen, and at a slower rate you check to see how many counts went by since your last check-in.)
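To make that check-in pattern concrete, here is a hedged C sketch; read_pulse_count() is a hypothetical stand-in for whatever counter read your DAQ API provides (in LabVIEW this would be a counter-input task polled by a slower-rate loop):

#include <stdint.h>

#define PULSES_PER_REV 1u   /* depends on the optical sensor/target setup */

extern uint32_t read_pulse_count(void);  /* hypothetical counter read */

static uint32_t last_count;

/* Call at a fixed slow rate, e.g. every 100 ms. The hardware counter
 * accumulates edges at full speed between our polls. */
double rpm_update(double poll_period_s)
{
    uint32_t now = read_pulse_count();
    uint32_t delta = now - last_count;   /* unsigned math survives 32-bit wrap */
    last_count = now;
    return ((double)delta / PULSES_PER_REV) / poll_period_s * 60.0;
}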
Homer512 is correct again: the NI-9252 has additional hardware filtering before the ADCs. This is for frequency filtering of the input source, not usually something you would use if you're just reading a 5 V signal from a sensor.
The NI 9203 uses a SAR ADC (200 kS/s); the NI 9253 uses a delta-sigma ADC (50 kS/s/ch). Long story short: the NI 9253 is more accurate but slower. I'd need more information to make a best-for-application judgement, specifically numerical requirements on resolution and timing.
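Whichever current-input module you end up with, turning the 4-20 mA loop reading into PSI is the same arithmetic. A minimal sketch, with the full-scale value left as a placeholder until the transducers are chosen:

/* Sketch: scale a 4-20 mA current-loop reading to engineering units.
 * full_scale_psi is a placeholder for the transducer's rated range. */
static double psi_from_loop_ma(double current_ma, double full_scale_psi)
{
    if (current_ma < 4.0)
        return -1.0;  /* below 4 mA usually indicates a broken loop */
    return (current_ma - 4.0) / 16.0 * full_scale_psi;  /* 4 mA -> 0, 20 mA -> full scale */
}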
Red flags: kinda captured it in the other points, but the project requirements have some gaps. I've had "... is not a priority" statements and requirements in units other than those of the measurement device (RPM vs. pulses/s or Hz) bite me enough times that I highly recommend having it all written down, even if it seems blatantly obvious.
Links may move in the future, and the titles are weird, but here are a few relevant NI docs:
"Number of Concurrent Tasks on a CompactDAQ Chassis Gen II" https://www.ni.com/en-us/support/documentation/supplemental/18/number-of-concurrent-tasks-on-a-compactdaq-chassis-gen-ii.html
"cDAQ Module Support for Accessing On-board Counters" https://www.ni.com/en-us/support/documentation/supplemental/18/cdaq-module-support-for-accessing-on-board-counters.html

So while the chip select is enabled, if the clock speed varies but stays within the specified range, will there still be SPI communication?

While communicating with the slave, will there be SPI communication if the clock speed varies?
Usually the specification for the part gives minimum timing requirements for the signal, but it usually gives no upper limit.
That means the time between level changes must be at least what the specification says, but the clock is not required to run at any particular frequency or to have equal periods for different pulses. You can even pause communication by holding the level unchanged for a long period of time.
Still, some devices may have special requirements on the frequency of pulses and their maximum duration. For example, ADC parts can rely on the SPI clock to perform measurements, so an uneven or overly slow SPI clock may influence the result.
So the answer is: in either case, carefully read the datasheet for the part you're using.
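To illustrate the point, here is a minimal bit-banged SPI master sketch (mode 0, MSB first), assuming hypothetical gpio_write()/gpio_read()/delay_us() platform helpers. The delays between edges are deliberately unequal, and the transfer still works as long as each datasheet minimum is respected:

#include <stdint.h>

/* Hypothetical platform helpers. */
extern void gpio_write(int pin, int level);
extern int  gpio_read(int pin);
extern void delay_us(unsigned us);

enum { PIN_SCK, PIN_MOSI, PIN_MISO };  /* chip select asserted elsewhere */

/* SPI mode 0: slave samples MOSI on the rising SCK edge. */
uint8_t spi_transfer_byte(uint8_t out)
{
    uint8_t in = 0;
    for (int bit = 7; bit >= 0; bit--) {
        gpio_write(PIN_MOSI, (out >> bit) & 1);
        delay_us(1);                  /* >= the datasheet's setup time */
        gpio_write(PIN_SCK, 1);
        in = (uint8_t)((in << 1) | (gpio_read(PIN_MISO) & 1));
        delay_us(5);                  /* uneven periods are fine */
        gpio_write(PIN_SCK, 0);
    }
    return in;
}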

Does laptop Embedded Controller have limited writes?

I am wondering whether I should be worried about excessive writes to the embedded controller registers on my laptop. I am guessing that if they are true registers, they probably act more like RAM than flash memory, so this isn't a problem.
However, I have a script that modifies the registers in my laptop's EC to better control the fan-speed curve. It has to be re-applied after each power change event, such as sleep/wake as well as power-cable events, so this happens fairly often. I just want to make sure I am not burning out my chips in the process.
The script I am using to write to the EC is located here:
https://github.com/RayfenWindspear/perl-acpi-fanspeed
Well, it seems you're writing to ACPI registers. "Registers" here does not refer to any specific hardware; it just means a specific address that you can reach over a specific bus. It is highly unlikely that something you have to re-write after every power cycle is held in permanent storage, so for all practical purposes I'd assume you can rely on this for as long as your laptop lives.
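For reference, here is roughly what such a write looks like from Linux userspace, assuming the ec_sys debugfs interface (modprobe ec_sys write_support=1). The register offset and value below are hypothetical placeholders; the real ones are laptop-specific, as in the linked Perl script:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    const off_t reg = 0x2F;          /* hypothetical fan-control register */
    const unsigned char val = 0x40;  /* hypothetical fan duty value */

    int fd = open("/sys/kernel/debug/ec/ec0/io", O_WRONLY);
    if (fd < 0) { perror("open"); return 1; }
    if (pwrite(fd, &val, 1, reg) != 1) { perror("pwrite"); close(fd); return 1; }
    close(fd);
    return 0;
}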
Hardware peripheral registers are almost universally implemented as SRAM cells. They will not wear out first: the fan you are controlling has a limited number of start/stop cycles, so it is much more likely that the act of toggling these registers will wear something else out prematurely (rather than the SRAM-type memory cell itself).
In your particular case, correctly driving a fan/motor can significantly improve its lifetime. Overdriving a fan/motor does not always make it go faster; instead it creates heat. The heat weakens the winding insulation, and eventually the coils short, reducing drive and eventually wearing the motor out. That said, the element being cooled can be damaged by excess heat, so tuning things just to reduce noise may not make sense.
Background details
Generally, the storage element is called a flip-flop, and it comes in various forms; SystemRDL is one example of how digital engineers model these, as are SystemC and others. In digital hardware, the flip-flops have default or reset values. These are fixed like ROM on each chip and are not normally re-programmable; alternatively a chip may use EEPROM technology (see Note 1), or be configured via input lines that the hardware designer pulls high/low with a resistor or connects to another element's GPIO.
It is analogous to 'initdata': program values that aren't zero get copied from flash, disk, etc. to memory at program startup. So the flip-flops normally do not hold state over a power cycle; something else does that.
'Flash' technology is based on a floating gate and uses Fowler-Nordheim 'quantum tunnelling' to program the floating gates, a process that is slightly destructive. The floating gate was invented in 1967, but the wider electronics industry did not start producing flash in volume until the early 90s, first NOR flash, then NAND flash and many variants; the underlying physics is the same, only the digital connections differ. Besides the wear-out effect you are concerned about, note that flash actually arrived after many well-established chips such as the 68k, i386, etc. Flip-flops were long proven by then, the 'register' part of the logic is not that large a portion of a typical chip, and a flip-flop uses the same logic (gates) as the rest of the chip, so using flash for registers would add overhead with little benefit.
An additional take-away is that the start-up and shutdown of chips is usually the most destructive time. Poor hardware designs often lack proper voltage supervision, and some lines may be left floating with the expectation that system software will set them immediately. Reset events, ESD, overheating, etc. will all be more harmful than the mere act of writing a peripheral register.
Note 1: EEPROM typically supports 100,000+ cycles. These features are typically used only once, at manufacture time, to set a chip configuration for the system. They are actually quite rare, but possible.
The MLC (multi-level) NAND flash in SSDs has pathetically low endurance, as few as 8,000 cycles in some cases. Old-school SLC (single-level) flash endures far more cycles (on the order of 100,000), but people demand large capacities.

How to decide system requirements for embedded systems application/software

How should I decide system requirements like:
RAM capacity
FLASH memory capacity
Processor frequency
etc
I am building an application to control NAND flash, an LCD driver, a UART, and a keypad using a 16-bit microcontroller.
This has to be estimated from previous projects with similar functionality, or even from other people's products. It is best to develop with a larger capacity and decide on the final parts when your software nears completion, because it's easier to omit components than to try to find room for them later. This kind of design can be an iterative process: start with one estimate and see if a prototype works, and don't commit to volumes until you are nearly at the end.
In the case of an LCD-based product, you will have two major components using up flash memory: the code and the LCD data (character strings, bitmaps, etc.). It's certainly easier to estimate the LCD data than the code, which depends on functionality, compiler optimisations, and so on. If you are bringing in external libraries, then at least you already have the code for them.
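As a hedged example of how simple the LCD-data estimate can be (all numbers below are invented placeholders):

/* Back-of-envelope flash budget for LCD assets. */
#include <stdio.h>

int main(void)
{
    unsigned screens = 4;                  /* full-screen bitmaps          */
    unsigned bitmap  = (128u * 64u) / 8u;  /* one 128x64 1-bpp screen      */
    unsigned font    = 96u * 8u;           /* 96 glyphs, 8 bytes per glyph */
    unsigned strings = 200u * 16u;         /* ~200 messages, avg 16 bytes  */

    printf("LCD data estimate: %u bytes\n",
           screens * bitmap + font + strings);
    return 0;
}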
In any case, have an upgrade plan. The worst thing is to run out of capacity at the end of the project and be struggling to squeeze in the last feature or debug solution without adding another problem. Make sure you know what the next size up of chip is and how you can make it fit; sometimes a PCB can be designed to take several different chips in the same position. Or have an expandable system, where you can plug things into a memory bus.
How many units will you be making?
If your volumes are low (<1e3) but per-unit profits are high and time to market matters, more hardware will get the developers done sooner.
If the volumes are huge (>1e6) and profits per unit are low, then you penny-pinch the hardware, but time to develop will go up. If time to market matters, that's a tradeoff.
Design the board with 2x the capacity (RAM/flash), but don't load the parts, other than to check it works.
Then if you run out of room, there is somewhere to go.
Will customers expect to get firmware updates? Or is this a drop-ship product with no support? Supportable is harder and needs more resources.
You'll need to pad resources to leave room to expand into if the product needs support for a long time.
For CPU frequency estimates: how much work is required to be done?
Get an eval board for a likely MCU and prove out the core function.
Let's say it's a display for a piece of exercise equipment. Can it keep up with the sensors on the device at 2-3x the designed pace? That's reading the sensors and updating the display. If cost is required to be low, you can then underclock the eval board and see what trades can be made.

Testing a Real-Time Operating System for 'Hardness'

I have an embedded device (Technologic TS-7800) that advertises real-time capabilities, but says nothing about 'hard' or 'soft'. While I wait for a response from the manufacturer, I figured it wouldn't hurt to test the system myself.
What are some established procedures to determine the 'hardness' of a particular device with respect to real time/deterministic behavior (latency and jitter)?
Being at college, I have access to some pretty neat hardware (good oscilloscopes and signal generators), so I don't think I'll run into any issues in terms of testing equipment, just expertise.
With that kind of equipment, it ought to be fairly easy to sync the scope to a steady clock, produce a spike each time the real-time system produces an output, and see how much that spike varies from center. The less the variation, the greater the hardness.
To clarify Bob's answer, maybe:
Use the signal generator to generate a pulse at some varying frequency; a random distribution across some range would be best.
Use the signal generator (trigger signal) to start the scope.
The RTOS has to respond, do its thing, and send an output pulse.
Feed the RTOS output into input 2 of the scope.
Set the scope to persist/collect mode.
Get the scope to start on A and stop on B, if you can.
In an ideal world, get it to measure the distribution for you; a LeCroy would.
Start with a much slower trace than you would expect: you need to be able to see the slow outliers.
You'll be able to see the distribution.
Assuming a normal distribution, the SD of the response-time variation is the SOFTNESS. (You won't really get a normal distribution in practice, but if you don't get outliers this is reasonably useful.)
If there are outliers of large latency, then the RTOS is not very hard: it does not meet deadlines well and is therefore unsuitable for hard real-time work.
Many RTOS-like things have a good left edge to the curve, sloping down like a 1/f curve; that's indicative of combined jitters. The thing to look out for is spikes of slow response at the right end of the trace. If there are no outliers, keep repeating the experiment with faster traces to get a good image of the slope. That should support some speculative conclusions in your paper.
If, for your application, a delta of say 1 µs is okay and you measure 0.5 µs, it's all cool.
Anyway, you can publish the results (perhaps even in the formal sense, but certainly on the web). Link from this question to the paper when you've written it.
Hard real-time has more to do with how your software works than with the hardware on its own. When asking whether something is hard real-time, the question must be applied to the complete system (hardware, RTOS, and application); hard versus soft real-time is a system design issue.
Under loading exceeding the specification, even a hard real-time system will fail (hopefully with proper failure indication), while a soft real-time system under low loading can give hard real-time results. How much processing must happen on time, and how much pre/post-processing can be performed, is the real key to hard/soft real-time.
In some real-time applications some data loss is not a failure; it just has to stay below a certain level, which is again a system criterion.
You can generate inputs to the board and have a small application count them and check at what level data starts to be lost. But that gives you a rating specific to that system running that application. As soon as you start doing more processing, your computational load increases and you have a different hard real-time limit.
This board running a bare-bones scheduler will give great, predictable hard real-time performance for most tasks.
Running a full RTOS with a heavy computational load, you will probably only get soft real-time.
Edit after comment
The most efficient and easiest way I have used to measure my software's performance (assuming you use a scheduler) is to use a free-running hardware timer on the board and timestamp the start and end of my cycle (or, if you run a full RTOS, timestamp your acquisition and transition). Save your max time and average the values over a second. If your average is around 50% and your max is within 20% of your average, you are OK; if not, it is time to refactor your application. As your application grows, the cycle time will grow, and you can monitor the effect of all your software changes on it.
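A minimal C sketch of that technique, assuming a hypothetical timer_now() that reads a free-running hardware timer in microseconds:

#include <stdint.h>

extern uint32_t timer_now(void);  /* hypothetical free-running us timer */

static uint32_t t_start, cycle_max_us;
static uint64_t cycle_sum_us, cycle_cnt;

void cycle_begin(void) { t_start = timer_now(); }

void cycle_end(void)
{
    uint32_t dt = timer_now() - t_start;  /* unsigned math survives wrap */
    if (dt > cycle_max_us) cycle_max_us = dt;
    cycle_sum_us += dt;
    cycle_cnt++;
}

/* Poll from a background task about once a second: if the average is
 * around 50% of budget and the max is within 20% of the average, OK. */
uint32_t cycle_avg_us(void)
{
    return cycle_cnt ? (uint32_t)(cycle_sum_us / cycle_cnt) : 0;
}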
Another way is to use a hardware timer to generate a cyclic interrupt. If you are on time, you reset the interrupt; if you miss the deadline, the interrupt handler signals a failure. This only warns you once your application is already taking too long, but it relies on hardware and interrupts, so you can't miss it.
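A sketch of that watchdog-style variant; the timer setup and ISR hookup are platform-specific stand-ins:

#include <stdbool.h>

extern void signal_failure(void);  /* latch an error, log, blink an LED... */

static volatile bool fed;

/* Call when the cycle's work is done, before the deadline. */
void deadline_feed(void) { fed = true; }

/* Hypothetical timer ISR, period equal to your deadline. */
void deadline_timer_isr(void)
{
    if (!fed)
        signal_failure();  /* the cycle missed its deadline */
    fed = false;           /* re-arm for the next cycle */
}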
These solutions also eliminate the need to hook up a scope to monitor the output, since the timing information can be displayed in any kind of terminal by a background task. If it is easy to monitor, you will monitor it regularly, solving timing problems as soon as they are introduced rather than at the end.
Hope this helps
I have the same board here at work. It's a slightly modified 2.6 kernel, I believe... not the real-time version.
I don't know that I've read anything in the docs yet that indicates it is meant for strict RTOS work.
I think this is not a hard real-time device, since it runs no RTOS.
I understand being a geek, but using an oscilloscope to test a computer with Ethernet/USB/other digital ports and a HUGE internal state (RAM) is both ineffective and unreliable.
Instead of watching waveforms, you can connect any PC to the output port and run a proper statistical analysis.
The established procedure (if the input signal is analog by nature) is to test the system against several characteristic inputs - traditionally spikes, step functions, and sine waves of different frequencies - and measure the phase shift and variance for each input type. The worst case is then used in the specification of the system.
Again, if you are using standard ports, you can easily generate those on a PC. If the input is truly analog, a separate DAC or simply a good sound card would be needed.
Now, that won't say anything about the OS being real-time: it could be running vanilla Linux or even Windows CE and still produce good, stable results in those tests if the hardware is fast enough.
So you need to simulate heavy and varying loads on the processor, memory, and all ports, let it heat up and eat memory for a few hours, and then repeat the tests. If latency stays constant, it's hard real-time. If latency does not increase above an acceptable limit under any load and input signal type, it's soft. Otherwise, it's advertisement.
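The PC-side number crunching is trivial; a sketch, assuming you have already captured an array of response latencies by whatever means:

#include <math.h>
#include <stdio.h>

/* Report mean, standard deviation and worst case of measured latencies. */
void latency_stats(const double *lat_us, int n)
{
    if (n <= 0)
        return;
    double sum = 0.0, sum2 = 0.0, max = 0.0;
    for (int i = 0; i < n; i++) {
        sum  += lat_us[i];
        sum2 += lat_us[i] * lat_us[i];
        if (lat_us[i] > max) max = lat_us[i];
    }
    double mean = sum / n;
    double sd   = sqrt(sum2 / n - mean * mean);
    printf("mean %.2f us, sd %.2f us, max %.2f us\n", mean, sd, max);
    /* Repeat under heavy load: a constant max suggests hard real-time;
     * bounded growth suggests soft. */
}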
P.S.: The implication is that even for critical systems you don't actually need hard real-time if your hardware is fast enough.