Why is rising edge preferred over falling edge - hardware

Flip-flops (registers, ...) are usually triggered by a rising or falling edge, but in code you mostly see an if-clause that uses rising-edge triggering. In fact, I have never seen code with a falling edge.
Why is that? Is it just that programmers use the rising edge out of habit, or is there some physical/analog reason that makes rising-edge designs faster, simpler, or more energy-efficient?

As zennehoy says, it's convention - but one going back to when logic was done in discrete chips with a few gates or flip-flops within them. Those packages of flip-flops were always rising-edge triggered... as far as I recall, but maybe someone with better recollection of the yellow books will correct me!
So when synthesis came along, no doubt everyone felt comfortable carrying on that way!

Nothing more than a matter of convention.
Using the rising edge is more common, and most component libraries use the rising edge. This means that using those libraries requires you to also use rising edges, or to add clock synchronization logic, or to keep your paths so short that the delay is less than half a clock cycle. Just using rising edges everywhere is by far the easiest.

When you design a (single-edge) DFF in a chip, you must choose at which clock edge (rising or falling) it will operate. This decision is independent of the implementation approach (i.e., master-slave or pulsed-latch), and it does not alter the number of transistors in the DFF itself.
Since positive-edge is the typical default (as in FPGAs), to operate at the negative clock edge the usual procedure is to simply use a positive-edge DFF with an inverted version of the clock signal connected to its clock port. If this is done locally (near the DFF clock port), then two extra transistors are indeed needed (to build a CMOS inverter for the clock).

It is somewhat a matter of convention, but if you compare the design of a falling-edge versus a rising-edge flip-flop, the only difference is an added inverter, and it turned out to be two transistors fewer on the rising edge.
But there are designs out there that use both; for example, in some data caches you write on the rising edge and read on the falling edge, or vice versa, depending on design choices!
Good question. Try it out, or take a course (maybe online) on digital integrated circuits.

How do you measure RSSIs of different parts of the spectrum (like FM, DVB-T and so on) using LabView?

I am doing a project on indoor localisation using fingerprinting. Is it possible to build a system in LabView which can scan the entire spectrum and provide me the RSSI measurements of different types of signals (say FM, GSM, DVB-T and so on)? In case it has to be done separately, can someone please point me to some resources that would help me find the RSSIs of, say, FM signals? I am new to SDRs and would really appreciate some help. I have used this paper as a reference:
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7444902
There can be no general method. RSSI is inherently signal-specific, and hence, for each signal type, you will need a different estimator. An estimator that estimates the received signal strength of FM broadcasts will only see noise in a DVB-T signal - incredibly high noise power with no stable signal at all - whereas a DVB-T RSSI indicator would only see a slowly moving interferer in an FM signal.
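For illustration, here is a minimal C++ sketch (names and units are my own assumptions) of the common final step of any such estimator: once the samples have been channelized to the band of the one signal you care about, RSSI is just the average power of that block. The channelization in front of it is exactly the signal-specific part you cannot skip.

    #include <cmath>
    #include <complex>
    #include <vector>

    // Estimate RSSI (in dB relative to full scale) from complex baseband
    // samples that have already been filtered down to the band of interest.
    double estimate_rssi_db(const std::vector<std::complex<double>>& iq) {
        if (iq.empty()) return -200.0;                // nothing to measure
        double power = 0.0;
        for (const auto& s : iq)
            power += std::norm(s);                    // |s|^2, accumulated
        power /= static_cast<double>(iq.size());      // mean power
        return 10.0 * std::log10(power + 1e-20);      // guard against log(0)
    }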
I see you're using the indoor-positioning-system tag. That is a very bad sign: in an indoor scenario, fading is extremely important, and your received signal strength allows absolutely no conclusions about the distance to the transmitter. This is pretty much the definition of the indoor channel, where lots and lots of reflections overlap and interfere, and there's not always a direct line of sight between transmitter and receiver.
I'm afraid you still have quite some theory to read up on.

STM32 BLDC driving

As explained here:
http://www.edn.com/design/sensors/4407580/Brushless-DC-Motors-Part-II--Control-Principles
switching the motor windings should occur when the back-EMF voltage crosses the 1/2 VDC value. How can that be done effectively on an STM32F4, which doesn't have an embedded comparator module?
It seems the only way is to use the analog watchdog, selecting the next single watched channel each time the interrupt fires?
And what if I want to drive 4 BLDCs from a single STM32 chip?
There are a few ways you can achieve this. The most popular way with STM32s is sensing the floating phase. The technique is a little different from what your link is suggesting; nevertheless, there are plenty of example codes to get this going.
Here is a nice explanation of ST's patented 3-resistor BLDC position sensing method (and some other techniques).
A nice starting point would be this manual.
The STM32 has two motor control timers (TIM1 and TIM8), so you can use them to drive 2 BLDC motors. Nonetheless, that does not stop you from using other timers in combination to drive more BLDCs, but it would demand some additional programming complexity.
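On the analog-watchdog idea from the question: it is workable. You point the watchdog at the single floating-phase channel and re-point it at the next phase on every commutation. A rough CMSIS-level sketch for the STM32F4 is below; the thresholds around 1/2 VDC (about 2048 counts on the 12-bit ADC), the band width and the commutation hooks are assumptions, not tested code:

    #include "stm32f4xx.h"  // CMSIS device header

    // Watch a single ADC channel with the analog watchdog, thresholds
    // centred on mid (e.g. the 1/2 VDC point) plus/minus band counts.
    static void awd_watch_channel(uint32_t channel, uint16_t mid, uint16_t band)
    {
        ADC1->HTR = mid + band;                    // high threshold
        ADC1->LTR = mid - band;                    // low threshold
        uint32_t cr1 = ADC1->CR1 & ~ADC_CR1_AWDCH; // clear channel selection
        cr1 |= channel & ADC_CR1_AWDCH;            // watch this channel only
        cr1 |= ADC_CR1_AWDSGL                      // single-channel mode
             | ADC_CR1_AWDEN                       // watchdog on regular group
             | ADC_CR1_AWDIE;                      // interrupt on crossing
        ADC1->CR1 = cr1;
    }

    extern "C" void ADC_IRQHandler(void)
    {
        if (ADC1->SR & ADC_SR_AWD) {
            ADC1->SR = ~ADC_SR_AWD;                // clear AWD flag (rc_w0)
            // commutate();                        // hypothetical hook
            // awd_watch_channel(next_floating_phase(), 2048, 16);
        }
    }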

Does laptop Embedded Controller have limited writes?

I am wondering if I should be worried about excessive writes to the embedded controller registers on my laptop. I am guessing that if they are true registers, they probably act more like RAM than flash memory, so this isn't a problem.
However, I have a script to modify the registers in my laptop's EC to better control the fan speed curve. It has to be re-applied after each power-state change, such as sleep/wake, as well as power-cable events, so it happens fairly often. I just want to make sure I am not burning out my chips in the process.
The script I am using to write to the EC is located here:
https://github.com/RayfenWindspear/perl-acpi-fanspeed
Well, it seems you're writing to ACPI registers. 'Registers' here does not refer to any specific hardware; it just means a specific address that you can reach using a specific bus. It is, however, highly unlikely that something you have to re-write after every power cycle is overwriting permanent storage, so for all practical purposes I'd assume that you can rely on this for as long as your laptop lives.
Hardware peripheral registers are almost universally implemented as SRAM cells. They will not wear out first. The fan you are controlling will have a limited number of start/stop cycles, so it is much more likely that the act of toggling these registers will wear something else out prematurely (rather than the SRAM-type memory cell itself).
In your particular case, correctly driving a fan/motor can significantly improve its lifetime. Overdriving a fan/motor does not always make it go faster, but instead creates heat. The heat weakens the wiring and eventually the coils will short, reducing drive and eventually wearing the motor out. That said, the element being cooled can be damaged by excess heat, so tuning things just to reduce noise may not make sense.
Background details
Generally, the element is called a flip-flop, and it comes in various forms; SystemRDL is one example, as are SystemC and others, in which digital engineers model these. In digital hardware, the flip-flops have default or reset values. This is fixed like ROM on each chip and is not normally re-programmable, uses EEPROM technology (see Note 1), or is often configured via input lines which the hardware designer can pull high/low with a resistor or connect to another element's GPIO.
It is analogous to 'initdata': program values that aren't zero get copied from flash, disk, etc. to memory at program startup. So the flip-flops normally do not hold state over a power cycle; something else does this.
'Flash' technology is based on a floating gate and uses 'quantum tunnelling' to program the floating gates. This process is slightly destructive. The tunnelling mechanism is named after Fowler and Nordheim, and the floating-gate device dates back to 1967, but the wider electronics industry did not start to produce flash until the early 90s, with NOR flash followed by NAND flash and many variants. The underlying physics is the same; just the digital connections are different. So, besides the wear-out you are concerned about, flash technology actually came after many hardware chips such as the 68k, i386, etc. 'Flip-flops' were well established, the 'register' part of the logic is not that large a portion of a typical chip, and a flip-flop uses the same logic (gates) as the rest of the chip logic. That means using flash for registers would be extra overhead with little benefit.
An additional takeaway is that the startup and shutdown of chips is usually the most destructive time. Often, poor hardware designers do not put in proper voltage supervision, and some lines may be floating with the expectation that system programs will set them immediately. Reset events, ESD, overheating, etc. will all be more harmful than just the act of writing a peripheral register.
Note 1: EEPROM typically has 100,000+ cycles. These features are typically used only once, at manufacture time, to set a chip configuration for the system. They are actually quite rare, but possible.
The MLC (multi-level) NAND flash in SSDs has pathetically low cycle counts, like 8,000 in some cases. The SLC (single-level) old-school flash has 10,000+ cycles, but people demand large storage capacities.

Testing Real Time Operating System for Hardness

I have an embedded device (Technologic TS-7800) that advertises real-time capabilities, but says nothing about 'hard' or 'soft'. While I wait for a response from the manufacturer, I figured it wouldn't hurt to test the system myself.
What are some established procedures to determine the 'hardness' of a particular device with respect to real time/deterministic behavior (latency and jitter)?
Being at college, I have access to some pretty neat hardware (good oscilloscopes and signal generators), so I don't think I'll run into any issues in terms of testing equipment, just expertise.
With that kind of equipment, it ought to be fairly easy to sync the o-scope to a steady clock, produce a spike each time the real-time system produces an output, and see how much that spike varies from center. The less the variation, the greater the hardness.
To clarify Bob's answer maybe:
Use the signal generator to generate a pulse at some varying frequency.
A random distribution across some range would be best.
Use the signal generator (trigger signal) to start the scope.
The RTOS has to respond, do its thing and send an output pulse.
Feed the RTOS output into input 2 of the scope.
Put the scope into persist/collect mode.
Get the scope to start on A, stop on B, if you can.
In an ideal world, get it to measure the distribution for you. A LeCroy would.
Start with a much slower trace than you would expect. You need to be able to see slow outliers.
You'll be able to see the distribution.
Assuming a normal distribution, the SD of the response-time variation is the SOFTNESS.
(This won't really happen in practice, but if you don't get outliers it is reasonably useful.)
If there are outliers of large latency, then the RTOS is NOT very hard: it does not meet deadlines well, and is then unsuitable for hard real-time work.
Many RTOS-like things have a good left edge to the curve, sloping down like a 1/f curve.
That's indicative of combined jitters. The thing to look out for is spikes of slow response at the right end of the scope. Keep repeating the experiment with faster traces if there are no outliers, to get a good image of the slope. That should be good for some speculative conclusions in your paper.
If, for your application, say, a delta of 1 us is okay and you measure 0.5 us, it's all cool.
Anyway, you can publish the results (probably in the publishing sense, but certainly on the web).
Link from this question to the paper when you've written it.
Hard real-time has more to do with how your software works than with the hardware on its own. When asking if something is hard real-time, the question must be applied to the complete system (hardware, RTOS and application); hard or soft real-time is a system-design issue.
Under loading exceeding the specification, even a hard real-time system will fail (hopefully with proper failure indication), while a soft real-time system with low loading could give hard real-time results. How much processing must happen on time, and how much pre/post-processing can be performed, is the real key to hard/soft real-time.
In some real-time applications some data loss is not a failure; it just has to stay below a certain level, again a system criterion.
You can generate inputs to the board and have a small application count them and check at what level data starts to be lost (a counting sketch follows below). But that gives you a rating specific to that system running that application. As soon as you start doing more processing, your computational load increases and you now have a different hard real-time limit.
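A minimal sketch of that counting test, assuming the pulses arrive on an interrupt line (the ISR hookup and the expected-count bookkeeping are illustrative, not a specific API):

    #include <cstdint>

    // An ISR counts incoming pulses; a checker compares against the number
    // the generator claims to have sent. Any shortfall means lost data.
    static volatile uint32_t pulses_seen = 0;

    extern "C" void input_pulse_isr(void) { ++pulses_seen; }

    // Call after each test run at a given input rate.
    bool no_data_lost(uint32_t expected_count)
    {
        return pulses_seen == expected_count;
    }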
This board running a bare-bones scheduler will give great, predictable hard real-time performance for most tasks.
Running a full RTOS with a heavy computational load, you will probably only get soft real-time.
Edit after comment
The most efficient and easiest way I have used to measure my software's performance (assuming you use a scheduler) is to use a free-running hardware timer on the board and timestamp the start and end of my cycle. Or, if you run a full RTOS, timestamp your acquisition and transition. Save your max time and run an average of the values over a second. If your average is around 50% and your max is within 20% of your average, you are OK. If not, it is time to refactor your application. As your application grows, the cycle time will grow, so you can monitor the effect of all your software changes on your cycle time (a sketch of this follows).
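A sketch of that timestamping idea, with read_hw_timer() standing in for whatever free-running counter your board provides (an assumed hook, not a real API):

    #include <cstdint>

    extern uint32_t read_hw_timer(void);  // free-running counter, fixed rate

    static uint32_t max_cycle = 0;        // worst cycle seen this interval
    static uint64_t sum_cycle = 0;        // accumulator for the average
    static uint32_t n_cycles  = 0;

    void run_one_cycle(void)
    {
        const uint32_t t0 = read_hw_timer();

        // ... the real-time work for this cycle goes here ...

        const uint32_t dt = read_hw_timer() - t0;  // unsigned wrap is safe
        if (dt > max_cycle) max_cycle = dt;
        sum_cycle += dt;
        ++n_cycles;
    }

    // Call from a background task about once per second, then reset.
    void report_cycle_stats(void)
    {
        const uint32_t avg = n_cycles ? (uint32_t)(sum_cycle / n_cycles) : 0;
        // log avg and max_cycle to a terminal here
        (void)avg;
        sum_cycle = 0; n_cycles = 0; max_cycle = 0;
    }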
Another way is to use a hardware timer to generate a cyclical interrupt. If you are in time, you reset the interrupt; if you miss the deadline, the interrupt handler signals a failure. This will only give you a warning once your application is already taking too long, but it relies on hardware and interrupts, so you can't miss it (see the variant sketched below).
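One way to realize that, as a flag-based variant of the same idea (the ISR and failure hook names are assumptions):

    // The application must "kick" the flag before the deadline timer fires.
    static volatile bool deadline_met = false;

    void cycle_work_done(void)       // call when the cycle's work finishes
    {
        deadline_met = true;
    }

    // Hooked to a hardware timer that interrupts once per deadline period.
    extern "C" void deadline_timer_isr(void)
    {
        if (!deadline_met) {
            // signal_failure();     // hypothetical hook: deadline missed
        }
        deadline_met = false;        // arm for the next period
    }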
These solutions also eliminate the requirement to hook up a scope to monitor the output, since the timing information can be displayed in any kind of terminal by a background task. If it is easy to monitor, you will monitor it regularly, catching timing problems as soon as they are introduced instead of solving them at the end.
Hope this helps
I have the same board here at work. It's a slightly modified 2.6 kernel, I believe... not the real-time version.
I don't know that I've read anything in the docs yet that indicates that it is meant for strict RTOS work.
I think that this is not a hard real-time device, since it runs no RTOS.
I understand being a geek, but using an oscilloscope to test a computer with Ethernet/USB/other digital ports and a HUGE internal state (RAM) is both ineffective and unreliable.
Instead of watching waveforms, you can connect any PC to the output port and run a proper statistical analysis.
The established procedure (if the input signal is analog by nature) is to test the system against several characteristic inputs - traditionally spikes, step functions and sine waves of different frequencies - and measure the phase shift and variance for each input type. The worst case is then used in the specifications of the system.
Again, if you are using standard ports, you can easily generate those on a PC. If the input is truly analog, a separate DAC or simply a good sound card would be needed.
Now, that won't say anything about the OS being real-time - it could be running vanilla Linux or even Win CE and still produce good and stable results in those tests if the hardware is fast enough.
So, you need to simulate heavy and varying loads on the processor, memory and all ports, let it heat and eat memory for a few hours, and then repeat the tests. If the latency stays constant, it's hard real-time. If it does not increase above an acceptable limit under any load and input signal type, it's soft. Otherwise, it's advertisement.
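The PC-side analysis can be as simple as this C++ sketch, which reduces a batch of measured latencies to the mean, spread and worst case that go into the spec (collecting the samples over the port is assumed to happen elsewhere):

    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Reduce latency samples (in, say, microseconds) to summary statistics.
    void report_latency_stats(const std::vector<double>& lat)
    {
        if (lat.empty()) return;
        double sum = 0.0, sumsq = 0.0;
        for (double x : lat) { sum += x; sumsq += x * x; }
        const double mean  = sum / lat.size();
        const double var   = sumsq / lat.size() - mean * mean;
        const double worst = *std::max_element(lat.begin(), lat.end());
        std::printf("mean=%g us, sd=%g us, worst=%g us\n",
                    mean, std::sqrt(var), worst);
    }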
P.S.: The implication is that even for critical systems you don't actually need hard real-time if you have fast enough hardware.

Lighting Control with the Arduino

I'd like to start out with the Arduino to make something that will (preferably) dim my room lights and turn on some recessed lighting for my computer when a button or switch is activated.
First of all, is this even possible with the Arduino?
Secondly, how would I switch on and off real lights with it? Some sort of relay, maybe?
Does anyone know of a good tutorial or something where at least parts of this are covered? I'll have no problems with the programming, just don't know where to start with hardware.
An alternative (and safer than playing with triacs - trust me, I've been shocked by one once and that's enough!) is to use X-10 home automation devices.
There is a PC (RS232) device (CM12U UK or CM11 US) you can get to control the others. You can also get lamp modules that fit between your lamp and the wall outlet, which allow you to dim the lamp by sending signals over the mains, and switch modules which switch loads on and off.
The Arduino has a TTL-level RS232 connection (it's basically what the USB connection uses) - pins 0 and 1 on the Diecimila - so you could use that: connect it via a level converter, which you can buy or make, to the X-10 controller. There are instructions on the Arduino website for making an RS232 port.
Alternatively, you could use something like the FireCracker for X-10, which uses 310 MHz (US) or 433 MHz (UK), and have your Arduino send out RF signals which the TM12U converts into proper X-10 mains signals for the dimmers etc.
In the US the X-10 modules are really cheap as well (sadly not the case in the UK).
Most people do it using triacs. A triac is like two diodes in anti-parallel (in parallel, but with their polarity reversed) with a trigger pin. A triac conducts current in either direction only when it's triggered. Once triggered, it acts as a regular diode: it continues to conduct until the current drops below its threshold.
You can see it as a bi-directional switch on an AC line, and you can vary the mean current by triggering it at different moments relative to the moment the AC sine wave crosses zero.
Roughly, it works like this: at the AC sine wave's zero crossing, the triac turns off and your lamp doesn't get any power. If you trigger the triac, say, halfway through the sine's swing, your lamp will get half the normal current, so it lights with half of its power, until the sine wave crosses zero again. At that point you start over.
If you trigger the triac sooner, your lamp will get current for a longer time interval, glowing brighter. If you trigger your triac later, your lamp glows fainter.
The same applies to any AC load.
It is almost the same principle as PWM for DC: you turn your current source on and off quicker than your load can react, and the amount of time it is turned on is proportional to the current your load will receive.
How do you do that with your Arduino?
In simple terms, you must first find the zero crossing of the mains, then you set up a timer/delay, and at its end you trigger the triac.
To detect the zero crossing one normally uses an optocoupler: you connect the LED side of the coupler to the mains and the transistor side to the interrupt pin of your Arduino.
You could connect your Arduino IO pins directly to the triacs' triggers, but I would use another optocoupler just to be on the safe side.
When the sine wave approaches zero, you get a pulse on your interrupt pin.
At this interrupt you set up a timer; the longer the timer, the less power your load will get. You also reset your triacs' pin states.
At this timer's interrupt you set your IO pins to trigger the triacs. (A minimal sketch of the scheme follows below.)
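Here is a minimal Arduino-style sketch of that scheme. The pin numbers, 50 Hz mains, and the blocking delay inside the ISR are simplifying assumptions of mine; a real dimmer would use a hardware timer instead:

    // Assumed wiring: zero-cross optocoupler on pin 2 (interrupt-capable),
    // triac-trigger optocoupler on pin 9; 50 Hz mains (10 ms half-cycle).
    const int ZERO_CROSS_PIN = 2;
    const int TRIAC_PIN      = 9;

    volatile int dimDelayMicros = 5000;   // 0..10000 us, larger = dimmer

    void onZeroCross() {
      delayMicroseconds(dimDelayMicros);  // phase delay sets brightness
      digitalWrite(TRIAC_PIN, HIGH);      // fire the triac
      delayMicroseconds(50);              // a short gate pulse is enough
      digitalWrite(TRIAC_PIN, LOW);       // triac conducts until zero cross
    }

    void setup() {
      pinMode(TRIAC_PIN, OUTPUT);
      pinMode(ZERO_CROSS_PIN, INPUT);
      attachInterrupt(digitalPinToInterrupt(ZERO_CROSS_PIN),
                      onZeroCross, RISING);
    }

    void loop() {
      // Demo: sweep from dim to bright.
      for (int d = 9000; d > 1000; d -= 100) {
        noInterrupts(); dimDelayMicros = d; interrupts();
        delay(50);
      }
    }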
Of course you must understand a little about the hardware side so you don't fry your board or burn your house down.
And it goes without saying you must be careful not to kill yourself when dealing with mains AC =).
HERE is the project that got me started some time ago.
It uses AVRs, so it should be easy to adapt to an Arduino.
It is also quite complete, with schematics.
Their software is a bit on the complex side, so you should start with something simpler.
There is just a ton of this kind of stuff at the Make magazine site. I think you can even find some examples of similar hacks.
I use a MOSFET for dimming 12 V LED strips with an Arduino. I chose an IRF3710 for my project, with a heat sink to be sure, and it works fine. I tested it with a 12 V halogen lamp; that worked too.
I connect the PWM output pin from the Arduino directly to the MOSFET's gate pin, and use analogWrite in code to control brightness (a minimal example follows).
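For reference, the sketch really is this small (pin 6 is my assumption; any PWM-capable pin works):

    const int GATE_PIN = 6;        // PWM pin driving the MOSFET gate

    void setup() {
      pinMode(GATE_PIN, OUTPUT);
    }

    void loop() {
      for (int duty = 0; duty <= 255; ++duty) {  // ramp brightness up
        analogWrite(GATE_PIN, duty);             // 0 = off, 255 = fully on
        delay(10);                               // ~2.5 s for a full ramp
      }
    }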
Regarding the 2nd question about controlling lights: you can switch 220 V on/off using relays, as partially seen in my photo; there are many relay boards for this.
As a quick start, you can get yourself one of those dimmer packs (50-80 € for four lamps),
then build the electronics for the Arduino to send DMX controls:
Arduino DMX shield
You'll get both the Arduino experience plus a good chance of not frying your surroundings with higher voltage.