Currently I am testing some RTL using ncverilog, and it is very ... very slow. I have heard that if we use some kind of FPGA board, things will be faster. Is that for real?
You're talking about two different things.
NCVerilog is a simulation tool while an FPGA board is real hardware. So, there will be differences. Real hardware will be generally faster but with a simulator, you can have all sorts of debugging fun. Trying to probe a specific signal is just a matter of adding a line to the testbench. Also, you can easily make changes to the simulated model instead of having to redesign the FPGA board.
If you run simulation on a sufficiently powerful machine, you can sometimes approximate real-world performance (assuming that the FPGA is a slow one).
All in all, you should do both. Use a simulator to do your basic development and evaluation. Move on to your FPGA hardware once your design is sufficiently well defined.
We've had the same issues with simulation speed too. However, we stick with simulations for the majority of our verification. Each sim checks a specific function and is much quicker than a system-level sim. We've also made them self-checking, which makes them useful for regression tests (unit tests).
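As an illustration of how such a regression could be scripted, here is a minimal Python sketch. The test directory names, the `ncverilog -f test.f` invocation, and the "TEST PASSED" marker printed by the self-checking testbench are all assumptions to adapt to your own flow:

```python
#!/usr/bin/env python3
"""Minimal regression runner: run each self-checking unit sim and grep its log."""
import subprocess
from pathlib import Path

# Assumption: each unit test is a directory containing a file list (test.f) for the
# simulator, and the self-checking testbench prints "TEST PASSED" or "TEST FAILED".
TESTS = ["tests/fifo_basic", "tests/fifo_overflow", "tests/arbiter_fairness"]

def run_test(test_dir: str) -> bool:
    log = subprocess.run(
        ["ncverilog", "-f", "test.f"],            # adjust to your simulator invocation
        cwd=test_dir, capture_output=True, text=True
    ).stdout
    Path(test_dir, "sim.log").write_text(log)     # keep the log around for debugging
    return "TEST PASSED" in log

if __name__ == "__main__":
    results = {t: run_test(t) for t in TESTS}
    for test, ok in results.items():
        print(f"{'PASS' if ok else 'FAIL'}  {test}")
    raise SystemExit(0 if all(results.values()) else 1)
```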
For long system tests on real-world signals that take too much time to simulate, we move these to the FPGA if we can. We need to re-check all these test cases manually after code changes, so it can be slow in its own way.
Sometimes, though, FPGAing a design is just not feasible: full designs can be too large to fit into an FPGA, or the clock rate may be too high. But remember that you don't necessarily have to FPGA your entire design; it may be enough to put just the important block you're interested in on the FPGA and check it out fully.
You can trace activity on signals in a running FPGA design using "embedded logic analyzer" software tools like Altera SignalTap or Xilinx ChipScope. Before synthesizing/mapping your RTL to the device, you would use these tools to attach soft probes to the signals you want to watch. You can set triggers so that a signal's values only get logged under certain conditions. Then you generate the bitfile and program the device with JTAG. The logic analyzer communicates with your PC over JTAG and logs activity on your probes, which you can then analyze.
It's a bit complicated to set up, as these tools are not especially easy to use, but you will get results much faster than with RTL simulation.
What kind of RTL are you testing? If you use an FPGA board, you can synthesize your code, provided you have the right tool for the right FPGA. Since FPGAs are reprogrammable, you can of course test your code on the board and have the target (FPGA) execute your design (RTL).
But it is no longer a simulation; it is a test, with given hardware, at a given clock speed.
And you don't get nice results on the screen; you need to use physical probes and a scope. Plus, you don't get to see how the internals of your design are working.
Verilog or VHDL simulation is sort of like running code under a debugger. FPGA testing is more like debugging with printf. The big difference is that when simulating, your CPU has to simulate the behaviour of all the logic gates that result from your code. On the FPGA there is no simulation; you just 'run' the code, so it is much faster, but you have less information.
You should use simulation for very small components, and then test your whole design on an FPGA.
You're probably asking about hardware simulation accelerators.
Here is one of them: GateRocket
I would like to have an Arduino operating in a CAN network. Does software that provides the OSI model network layer exist for Arduino? I would imagine detecting the HI/LOW levels with GPIO/ADC and sending the signal to the network with a DAC. It would be nice to have that without any extra hardware attached. I don't mind having the terminating resistor required by the CAN network, though.
By Arduino I mean any of them. My intention is to keep the development environment.
If such software does not exist, is there any technical obstacle to it, like limited flash size (again, I don't mean a particular board with a certain ATmega chip)?
You can write a bit banging CAN driver, but it has many limitations.
First there's the timing: it's hard to achieve the correct bit timing and also the arbitration.
You will be able to get 10 kbit/s or perhaps even 50 kbit/s, but that consumes a huge amount of your CPU time (at 50 kbit/s each bit lasts only 20 µs, roughly 320 cycles on a 16 MHz AVR).
And the code itself is a pain.
You have to calculate the CRC on the fly (easy), but implementing the collision detection and all the timing parameters is not.
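For a flavour of the CRC part: CAN frames use a 15-bit CRC with the polynomial x^15 + x^14 + x^10 + x^8 + x^7 + x^4 + x^3 + 1 (0x4599). Here is a minimal Python sketch of the bit-serial update, the same per-bit step a bit-banged driver would perform, shown offline purely for illustration (the example bit string is arbitrary):

```python
CAN_CRC15_POLY = 0x4599  # x^15 + x^14 + x^10 + x^8 + x^7 + x^4 + x^3 + 1

def can_crc15(bits):
    """Bit-serial CAN CRC-15 over a sequence of 0/1 bits (SOF through the data field)."""
    crc = 0
    for bit in bits:
        crc_nxt = bit ^ ((crc >> 14) & 1)   # compare incoming bit with the CRC MSB
        crc = (crc << 1) & 0x7FFF           # shift left, keep 15 bits
        if crc_nxt:
            crc ^= CAN_CRC15_POLY
    return crc

# Example: CRC over an arbitrary bit string
frame_bits = [int(b) for b in "0101100111000101"]
print(f"CRC-15 = 0x{can_crc15(frame_bits):04X}")
```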
I once did this for a company, but it was a really bad idea.
Better to buy a chip for 1 Euro and be happy.
There are several CAN bus shield boards available (e.g. this and this), and that would be a far better solution. It is not just a matter of the controller chip; the bus interface, line drivers, and power all need to be considered. If you have the resources and skills, you can of course create your own board or breadboard one for less.
Even if you bit-bang it via GPIO you would need some hardware mods I believe to handle bus contention detection, and it would be very slow and may not interoperate well with "real" CAN controllers on the bus.
If your aim is to communicate between devices of your own design rather than off-the-shelf CAN devices, then you don't need CAN for that; something proprietary will suffice, and a UART will perform faster than a bit-banged CAN implementation.
I don't think such software exists. The CAN bus is more complex than, for example, I2C. Basically you would have to implement the functionality of both the CAN controller and the CAN transceiver. See this thread for more details (in German).
Alternatively, you could use one of the CAN shields. Another option would be to use a BeagleBone with a suitable CAN cape.
Also take a look at AVR-CAN.
I'm in the process of buying a new data acquisition system for my company to use for various projects. At first, its primary purpose will be to monitor up to 20 thermocouples and control the temperature of a composites oven. However, I also plan on using it to monitor accelerometers and strain gauges, and to act as a signal generator.
I probably won't be the only one to use it, but I have a good bit of programming experience with Atmel microcontrollers (C). I've used LabVIEW before, but ~5 years ago. LabVIEW would be good because it is easy for both me and my coworkers to pick up. On the flip side, it's expensive. Right now I have an NI CompactDAQ system with two voltage cards and one thermocouple card plus LabVIEW specced out, and it's going to cost $5779!
I'm going to try to get the same I/O capabilities with different NI hardware plus LabVIEW to see if I can get it for less. I'd also like to hear if anyone has suggestions other than LabVIEW.
Thanks in advance!
Welcome to test and measurement. It's pretty expensive for pre-built stuff, but you trade money for time.
You might check out the somewhat less expensive Agilent 34970A (and associated cards). It's a great workhorse for different kinds of sensing, and, if I recall correctly, it comes with some basic software.
For simple temperature control, you might consider a PID controller (Watlow or Omega used to be the brands, but it's been a few years since I've looked at them).
You also might look into the low-cost usb solutions from NI. The channel count is lower, but they're fairly inexpensive. They do still require software of some type, though.
There are also a fair number of good smaller companies (like Hytek Automation) that produce some types of measurement and control devices or sub-assemblies, but YMMV.
There's a lot of misconception about what will and will not work with LabView and what you do and do not need to build a decent system with it.
First off, as others have said, test and measurement is expensive. Regardless of what you end up doing, the system you describe IS going to cost thousands to build.
Second, you don't NEED to use NI hardware with LabView. For thermocouples your best bet is to look into multichannel or multiple single-channel thermocouple units - something that reads from a thermocouple and outputs to something like RS-232, etc. The OMEGABUS Digital Transmitters are an example, but many others exist.
In this way, you need only a breakout card with lots of RS-232 ports, and you can grow your system as you need to. You can still use LabVIEW to acquire the data via RS-232 and then display, log, and process it however you like.
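For instance, polling one of those RS-232 transmitters from a script is only a few lines. This is a hedged sketch using pyserial; the port name, baud rate, query string, and reply format are placeholders, so check your transmitter's manual for its actual protocol:

```python
import serial  # pyserial

# Assumptions (check your transmitter's manual): the unit appears as /dev/ttyUSB0
# (or COMx on Windows), talks 9600-8-N-1, and returns an ASCII temperature reading
# in response to a query string.
PORT, QUERY = "/dev/ttyUSB0", b"*01X01\r"    # hypothetical query command

def read_temperature(port: str = PORT) -> float:
    with serial.Serial(port, 9600, timeout=1) as link:
        link.write(QUERY)
        reply = link.readline().decode(errors="replace").strip()
        return float(reply)                   # e.g. "123.4" -> 123.4 deg C

if __name__ == "__main__":
    print(f"Oven temperature: {read_temperature():.1f} degC")
```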
Third-party signal generators would also work, for example. You can pick up good ones (with a GPIB connection) reasonably cheaply, and with a GPIB board you can integrate them into LabVIEW as well. That is, if you want something like a function generator, of course (duty-cycled pulses, standard sine/triangle/ramp functions, etc.). If you're talking about arbitrary signal generation, then this remains a reasonably expensive thing to do (if $5000 is our goalpost for "expensive").
This also hinges on what you need the signal generation for - if you're thinking of control signals then, again, there may be cheaper and more robust options available. For temperature control, for example, separate hardware PID controllers are probably the best bet. This also takes care of your thermocouple problem, since PID controllers will typically accept thermocouple inputs as well. In this way you only need one interface (RS-232, for example) to the external PID controller, and you have full access in LabVIEW to temperature readings as well as the ability to control setpoints and PID parameters in one unit.
Perhaps if you could elaborate on not just the system components as you've planned them at present, but the ultimate system functionality, it may be easier to suggest alternatives - not simply alternative hardware, but an alternative system design altogether.
Edit:
Have a look at the Omega CNi8C22-C24 and CNiS8C24-C24 units - these are temperature and strain DIN PID units which will take inputs from your thermocouples and strain gauges, process the inputs into proper measurements, and communicate with LabView (or anything else) via RS-232.
This isn't necessarily a software answer, but if you want low-cost data acquisition, you might want to look at the LabJack. It's basically a microcontroller and USB interface wrapped in a nice box (like an Arduino (Atmel AVR + USB-serial converter) but closed source) with a lot of drivers and functions for various languages, including LabVIEW.
Reading a thermocouple can be tough because microvolts are significant (a type-K thermocouple only produces about 41 µV/°C), so you either need a high-resolution A/D or an amplifier on the input. I think NI may sell a specialized digitizer for thermocouple readings, but again you'll pay.
As far as the software answer, LabVIEW will work nicely with almost any hardware you choose - e.g. I built my own temperature controller based on an Arduino (with an AD7780), wrote a little interface using serial commands, and then talked with it using LabVIEW. But if you're willing to pay a premium for a guaranteed-to-work-out-of-the-box solution, you can't go wrong with LabVIEW and an NI part.
LabWindows CVI is NI's C IDE, with good integration with their instrument libraries and drivers. If you're willing to write C code, maybe you could get by with the base version of LabWindows CVI, versus having to buy a higher-end LabView version that has the functionality you need. LabWindows CVI and LabView are priced identically for the base versions, though, so that may not be much of an advantage.
Given the range of measurement types you plan to make and the fact that you want colleagues to be able to use this, I would suggest LabVIEW is a good choice - it will support everything you want to do and make it straightforward to put a decent GUI on it. Assuming you're on Windows then the base package should be adequate and if you want to build stand-alone applications, either to deploy on other PCs or to make a particular setup as simple as possible for your colleagues, you can buy the application builder separately later.
As for the DAQ hardware, you can certainly save money - e.g. Measurement Computing have a low cost 8-channel USB thermocouple input device - but that may cost you in setup time or be less robust to repeated changes in your hardware configuration for different tests.
I've got a bit of experience with LabView stuff, and if you can afford it, it's awesome (and useful for a lot of different applications).
However, if your applications are simple you might actually be able to hack something together with one or two Arduinos; it's OSS and has some good cheap hardware boards.
LabView really comes into its own with real-time applications or RAD (because GUI development is super easy), so if all you're doing is running a couple of thermocouples, I'd find something cheaper.
A few thousand dollars is not a lot of money for process monitoring and control systems. If you do a cost/benefit analysis, you will very quickly recover your development costs if the scope of the system is right and if it does the job it is intended to do.
Another tool to consider is National Instruments Measurement Studio with VB .NET. This way you can still use the NI hardware if you want and can still build nice GUIs quickly.
Alternatively, as others have said, it is perfectly viable to get industrial serial based instruments and talk to them with LabVIEW, VB .NET, c# or whatever you like.
If you go down the route of serial instruments, another piece of hardware that might be useful is a serial terminal server (example). These allow you to connect arbitrary numbers of devices to your network. Your computers can then use them as though they were physical COM ports.
Have you looked at MATLAB? They have a Data Acquisition Toolbox, and CompactDAQ is supported hardware.
LabVIEW is a great visual programming environment if you want to drag, drop, and visualize your system. NI hardware also comes with the NI-DAQmx library, which can be accessed from your own code. A feasible solution for you would be to import the libraries into another programming language and write code for all the activities you would otherwise perform in LabVIEW. Other overheads, like code optimization, become your responsibility, but you are free to tweak the normal method flow by introducing your own improvements at suitable junctures in the DAQ process.
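For example, NI publishes a Python wrapper for the NI-DAQmx driver (the `nidaqmx` package). A minimal sketch of reading thermocouples without LabVIEW; the module name and channel range below are assumptions, so substitute whatever your chassis shows up as in NI MAX:

```python
import nidaqmx
from nidaqmx.constants import ThermocoupleType, TemperatureUnits

# Assumption: a thermocouple module appears as "cDAQ1Mod1" in NI MAX;
# adjust the channel range to your module(s).
with nidaqmx.Task() as task:
    task.ai_channels.add_ai_thrmcpl_chan(
        "cDAQ1Mod1/ai0:19",                      # 20 thermocouple channels
        thermocouple_type=ThermocoupleType.K,
        units=TemperatureUnits.DEG_C,
    )
    readings = task.read()                       # one sample per channel
    for ch, temp in enumerate(readings):
        print(f"TC{ch:02d}: {temp:6.1f} degC")
```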
Disclaimer: I am (mostly) hardware ignorant. This is probably my problem. However I find it hard to accept that it is not possible to debug hardware so therefore I just wanted to get some second opinions.
We have an issue where certain actions (swapping USB devices in and out at run-time) can blow either the USB hub or the USB chip on our board (it's custom hardware). It's a fuzzy problem (it appears that the degree of "blownness" can vary a bit), and it manifests itself in intermittent ways with various symptoms that are very difficult to reliably reproduce (typically random corruption of packets).
This results in difficulty in ascertaining whether a newly reported problem is due to this hardware fault or is actually a bug in the software. We have since implemented protection on these devices, but if an unprotected device is used with a protected device, there is a possibility of it then tainting the (now protected) device. One of the ports is also not protected, meaning that someone could still "kill off" a unit that should be safe by accidentally using the wrong port.
The upshot of this is that it is impossible to tell which of our devices suffer this issue without completely replacing ALL the hardware (we've bitten the bullet for most of our production hardware but there is still a lot of dev and QA hardware out there with this issue).
I would imagine that, given a piece of hardware, it should be possible to use some kind of hardware diagnostic tools to determine whether the kit is faulty or not. Am I living in a dream world? My hardware department tells me that the only tests that could prove the fault would be software tests, but as I have stated, the symptoms are very difficult to reproduce. As I'm not that experienced with hardware, I don't know if this is the only answer or not. I therefore ask the world.
Built In Test Equipment is used for performing a Built In Test
BITE for BIT
(No bytes involved.)
It is completely, utterly normal for military/aerospace equipment to have extra hardware to test itself with.
The original IBM PC had a surprising quantity of test hardware built in.
In the case of your equipment, a test device and some statistical analysis would do the trick.
This could be done in hardware in a dongle, but frankly it would be easier to do with some software.
Use two back-to-back USB to RS232 serial converters to make a USB loopback device.
Send lots of data, checksum the packets, and measure error rates.
I'm assuming your errors occur on the in->out side as well as the out->in side.
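A rough sketch of that loopback test in Python with pyserial, assuming the two converters enumerate as /dev/ttyUSB0 and /dev/ttyUSB1 (adjust ports, baud rate, and packet counts to your setup):

```python
import os, zlib, serial  # pyserial

# Assumption: the two back-to-back USB-RS232 converters enumerate as these ports.
TX_PORT, RX_PORT = "/dev/ttyUSB0", "/dev/ttyUSB1"
PACKET_LEN, PACKETS = 64, 10_000

def loopback_error_rate() -> float:
    errors = 0
    with serial.Serial(TX_PORT, 115200, timeout=1) as tx, \
         serial.Serial(RX_PORT, 115200, timeout=1) as rx:
        for _ in range(PACKETS):
            payload = os.urandom(PACKET_LEN)
            tx.write(payload + zlib.crc32(payload).to_bytes(4, "big"))
            frame = rx.read(PACKET_LEN + 4)
            ok = (len(frame) == PACKET_LEN + 4 and
                  zlib.crc32(frame[:-4]).to_bytes(4, "big") == frame[-4:])
            errors += not ok
            # Note: a production test should also resynchronize after dropped bytes,
            # otherwise one lost byte makes every following frame count as an error.
    return errors / PACKETS

if __name__ == "__main__":
    print(f"Packet error rate: {loopback_error_rate():.4%}")
```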
Really, your hardware guys need to look at some application notes; USB IS hotplug-safe IF done according to the book.
There is a cool example out on the net of opto-coupling a USB chip's connection to the board it's on to prevent this sort of thing. The USB chip is connected to the host, powered from the host, and the interface to the USB chip is SPI, which is opto-coupled back to the rest of the board.
In your case, the chips are failing partially. Injured devices may work fine for months and then die. An electrostatic discharge ("a static zap") can do the same thing that you describe. A device can be injured by shocks too small for you to feel.
The wires and features in semiconductors are microscopic, and easily damaged by stray electricity.
If the hardware design is mostly right, the likely cause of the problems you've been experiencing is ESD when the devices are handled to plug/unplug. Your device has its own power supply, and its ground voltage floats relative to the other end of the USB cable until it is connected.
Hope this helps.
No, it's not.
A lot of hardware manufacturers begin with hardware testing. Inputs and outputs (IO) are just a matter of evaluating where the circuit flow is going. Consider the abstraction that both software and hardware deal in boolean operations.
Hardware is just a little less human readable!
When it comes down to it, hardware's line of communication is (at its most basic) HIGH and LOW through various pins.
I have a brother (in the automobile tech industry) who has used an electrometer to measure voltage on pins to isolate where the problem is (I'm not really smart enough in that field to go into more detail on how he does it).
Your problem is that the only known symptom (packet corruption in the USB stream) is so hard to detect that you're going to need software (at some level) to detect it.
If you can work out why packets are getting corrupted (bad voltages?) then maybe you could detect that with hardware?
Otherwise you need some kind of robust testing kit, and software to send/receive lots of packets to look for corruption?
No. That's what oscilloscopes and logic analyzers are for. There is also more specialized equipment, such as USB testers.
The simpler the hardware is, and the more access you have to the signals, the more likely you are to be able to diagnose it in a 'purely hardware' kind of way. For example if you had a simple parallel port card plugged into a PCI slot, it would be relatively straightforward to put a bus analyzer on the PCI bus, and the adapter's output, and see if the outputs did the right thing when the card was addressed. But note you'd still need to attempt to access that card from the PCI bus, which would mean either (A) some kind of PCI bus simulation, which would be one heck of a big pile of test hardware, or (B) a cheap off-the-shelf PC with a few lines of test code.
But then at the other end of the spectrum, suppose you're dealing with a large FPGA. You can get one heck of a lot of logic into an FPGA, and you won't necessarily have access to all the test points you'd like. I've personally encountered a bug with a serial port embedded in an FPGA, where a race condition with the shift register preload register would occasionally corrupt a byte. Hypothetically the VHDL could have been reworked to bring out test points, and a pile of scopes and analyzers gathered, but from a management standpoint it was much more cost effective to try to tease the problem out with software. Under normal usage, the bug in question would have turned up once every blue moon. We iterated through speculation about the conditions that would elicit the bug, and refining the test code, until we had test software that could reproduce the bug 2-3 times a minute. At that point we could actually provide clues to the VHDL guys that helped them fix the problem quickly.
Long story short, inside of a week a hardware bug was smoked out via software, whereas starting with the same information and going 'hardware only' would likely have not been any faster, and would have required a lot of expensive test equipment. So, yeah, you probably can do it without software, but as usual it's a trade-off, and you have to find the right balance point between the amount of software vs hardware for the job.
I have an embedded device (Technologic TS-7800) that advertises real-time capabilities, but says nothing about 'hard' or 'soft'. While I wait for a response from the manufacturer, I figured it wouldn't hurt to test the system myself.
What are some established procedures to determine the 'hardness' of a particular device with respect to real time/deterministic behavior (latency and jitter)?
Being at college, I have access to some pretty neat hardware (good oscilloscopes and signal generators), so I don't think I'll run into any issues in terms of testing equipment, just expertise.
With that kind of equipment, it ought to be fairly easy to sync the o-scope to a steady clock, produce a spike each time the real-time system produces an output, and see how much that spike varies from center. The less the variation, the greater the hardness.
To clarify Bob's answer, maybe:
Use the signal generator to generate a pulse at some varying frequency.
A random distribution across some range would be best.
Use the signal generator (trigger signal) to start the scope.
The RTOS has to respond, do its thing, and send an output pulse.
Feed the RTOS output into input 2 of the scope.
Set the scope to persist/collect mode.
Get the scope to start on A, stop on B, if you can.
In an ideal world, get it to measure the distribution for you. A LeCroy would.
Start with a much slower trace than you would expect. You need to be able to see slow outliers.
You'll be able to see the distribution.
Assuming a normal distribution, the SD of the response-time variation is the SOFTNESS.
(This won't really happen in practice, but if you don't get outliers it is reasonably useful.)
If there are outliers of large latency, then the RTOS is NOT very hard: it does not meet deadlines well and is unsuitable for hard real-time work.
Many RTOS-like things have a good left edge to the curve, sloping down like a 1/f curve.
That's indicative of combined jitters. The thing to look out for is spikes of slow response at the right end of the scope. Keep repeating the experiment with faster traces if there are no outliers, to get a good image of the slope. That should be good for some speculative conclusions in your paper.
If, for your application, a delta of say 1 µs is okay and you measure 0.5 µs, it's all cool.
Anyway, you can publish the results (perhaps even in the formal sense, but certainly on the web).
Link from this question to the paper when you've written it.
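If your scope (or a timer/counter) can export the individual response-time measurements, a few lines of Python will give you the distribution numbers directly. This sketch assumes a file with one latency value in microseconds per line; the 6-sigma outlier threshold is just an illustrative choice:

```python
import statistics

# Assumption: "latencies.csv" holds one measured response time in microseconds per
# line, exported from the scope or a timer/counter.
def summarize(path: str = "latencies.csv") -> None:
    with open(path) as f:
        samples = [float(line) for line in f if line.strip()]
    mean = statistics.mean(samples)
    sd = statistics.stdev(samples)          # the "softness" figure suggested above
    worst = max(samples)
    outliers = [s for s in samples if s > mean + 6 * sd]
    print(f"n={len(samples)}  mean={mean:.2f} us  sd={sd:.2f} us  worst={worst:.2f} us")
    print(f"samples beyond mean + 6*sd: {len(outliers)} (any at all is bad news for 'hard')")

if __name__ == "__main__":
    summarize()
```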
Hard real-time has more to do with how your software works than the hardware on its own. When asking if something is hard real-time it must be applied to the complete system (Hardware, RTOS and application). This means hard or soft real-time is system design issues.
Under loading exceeding the specification, even a hard real-time system will fail (hopefully with proper failure indication), while a soft real-time system with low loading would give hard real-time results. How much processing must happen in time, and how much pre/post-processing can be performed, is the real key to hard/soft real-time.
In some real-time applications, some data loss is not a failure; it just has to stay below a certain level - again, a system criterion.
You can generate inputs to the board and have a small application count them and check at what level data is going to be lost. But that gives you a rating specific to that system running that application. As soon as you start doing more processing your computational load increases and you now have a different hard real-time limit.
This board, running a bare-bones scheduler, will give great, predictable hard real-time performance for most tasks.
Running a full RTOS with a heavy computational load, you will probably only get soft real-time performance.
Edit after comment
The most efficient and easiest way I have used to measure my software's performance (assuming you use a scheduler) is to use a free-running hardware timer on the board and timestamp the start and end of my cycle. Or, if you run a full RTOS, timestamp your acquisition and transition. Save your max time and average the values over a second. If your average is around 50% of your budget and your max is within 20% of your average, you are OK. If not, it is time to refactor your application. As your application grows, the cycle time will grow, and you can monitor the effect of all your software changes on it.
Another way is to use a hardware timer to generate a cyclical interrupt. If you are in time, reset the interrupt. If you miss the deadline, have the interrupt handler signal a failure. This, however, will only give you a warning once your application is already taking too long, but it relies on hardware and interrupts, so you can't miss it.
These solutions also eliminate the requirement to hook up a scope to monitor the output, since the timing information can be displayed in any kind of terminal by a background task. If it is easy to monitor, you will monitor it regularly, catching timing problems as soon as they are introduced rather than trying to solve them at the end.
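For illustration, a hedged Python sketch of the monitoring side, assuming the background task prints one cycle time in microseconds per line on the serial console (the port name and cycle budget are placeholders, and the thresholds follow the 50%/20% rule of thumb above):

```python
import serial  # pyserial

# Assumptions: the board's background task prints one cycle time in microseconds per
# line on its serial console, and the port/budget below match your setup.
PORT, CYCLE_BUDGET_US = "/dev/ttyUSB0", 1000.0

def watch_cycle_times() -> None:
    window, worst = [], 0.0
    with serial.Serial(PORT, 115200, timeout=2) as console:
        while True:
            line = console.readline().decode(errors="replace").strip()
            if not line:
                continue
            t = float(line)
            window.append(t)
            worst = max(worst, t)
            if len(window) >= 1000:              # roughly "average over a second"
                avg = sum(window) / len(window)
                ok = avg <= 0.5 * CYCLE_BUDGET_US and worst <= 1.2 * avg
                print(f"avg={avg:.1f} us  worst={worst:.1f} us  -> "
                      f"{'OK' if ok else 'time to refactor'}")
                window, worst = [], 0.0

if __name__ == "__main__":
    watch_cycle_times()
```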
Hope this helps
I have the same board here at work. It's a slightly-modified 2.6 Kernel, I believe... not the real-time version.
I don't know that I've read anything in the docs yet that indicates that it is meant for strict RTOS work.
I think that this is not a hard real-time device, since it runs no RTOS.
I understand being a geek, but using an oscilloscope to test a computer with Ethernet/USB/other digital ports and a HUGE internal state (RAM) is both ineffective and unreliable.
Instead of watching wave forms, you can connect any PC to the output port and run proper statistical analysis.
The established procedure (if the input signal is analog by nature) is to test the system against several characteristic inputs - traditionally spikes, step functions, and sine waves of different frequencies - and measure the phase shift and variance for each input type. The worst case is then used in the specifications of the system.
Again, if you are using standard ports, you can easily generate those on PC. If the input is truly analog, a separate DAC or simply a good sound card would be needed.
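For example, the characteristic inputs are easy to generate on the PC side. A small NumPy sketch; the sample rate and frequencies are placeholders, and streaming the vectors out depends on whatever DAC or sound-card API you use (e.g. the sounddevice package):

```python
import numpy as np

# Assumptions: 48 kHz sample rate and 1 s test vectors; adjust to your DAC/sound card.
FS = 48_000
t = np.arange(FS) / FS

spike = np.zeros(FS)
spike[FS // 2] = 1.0                                   # single-sample impulse
step = (t >= 0.5).astype(float)                        # step at t = 0.5 s
sines = {f"sine_{f}": np.sin(2 * np.pi * f * t)        # sine waves at a few frequencies
         for f in (10, 100, 1000)}

# Save the vectors; play each out, capture the device's response, and compare
# phase shift and variance per input type (the worst case goes in the spec).
np.savez("test_inputs.npz", spike=spike, step=step, **sines)
```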
Now, that won't say anything about the OS being real-time - it could be running vanilla Linux or even Win CE and still produce good and stable results in those tests if the hardware is fast enough.
So you need to simulate heavy and varying loads on the processor, memory, and all ports, let it heat up and eat memory for a few hours, and then repeat the tests. If the latency stays constant, it's hard real-time. If it doesn't increase above an acceptable limit under any load and input signal type, it's soft real-time. Otherwise, it's advertisement.
P.S.: The implication is that even for critical systems you don't actually need hard real-time if your hardware is fast enough.
I would like to create/start a simulator for the following microcontroller board: http://www.sparkfun.com/commerce/product_info.php?products_id=707#
The firmware is written in assembly, so I'm looking for some pointers on how one would go about simulating the inputs the hardware would receive, and having the simulator respond to the outputs from the firmware (which would also require running the firmware in the simulated environment).
Any pointers on how to start?
Thanks
Chris
Writing a whole emulator is going to be a real challenge. I've attempted to write an ARM emulator before, and let me tell you, it's not a small project. You're going to either have to emulate the entire CPU core, or find one that's already written.
You'll also need to figure out how all the IO works. There may be docs from sparkfun about that board, but you'll need to write a memory manager if it uses MMIO, etc.
The concept of an emulator isn't that far away from an interpreter, really. You need to interpret the firmware code, and basically follow along with the instructions.
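To make the interpreter idea concrete, here is a toy Python fetch-decode-execute loop covering just two PIC18 literal instructions (MOVLW and ADDLW). It only illustrates the structure; a real core would need the full instruction set, banked data memory, the stack, peripherals, and interrupts:

```python
# Toy fetch-decode-execute loop for a tiny subset of the PIC18 instruction set.
# MOVLW k = 0x0Ekk, ADDLW k = 0x0Fkk; everything else is left unimplemented here.

program = [0x0E05, 0x0F03, 0x0F0A]   # MOVLW 0x05; ADDLW 0x03; ADDLW 0x0A

def run(program):
    wreg, carry, pc = 0, 0, 0
    while pc < len(program):
        opcode = program[pc]
        pc += 1
        top, literal = opcode >> 8, opcode & 0xFF
        if top == 0x0E:                       # MOVLW k: load literal into W
            wreg = literal
        elif top == 0x0F:                     # ADDLW k: add literal to W
            result = wreg + literal
            carry = int(result > 0xFF)        # carry flag (DC/Z/N/OV omitted here)
            wreg = result & 0xFF
        else:
            raise NotImplementedError(f"opcode 0x{opcode:04X} not modeled")
        print(f"pc={pc:02d}  W=0x{wreg:02X}  C={carry}")
    return wreg

run(program)   # final W should be 0x12
```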
I would recommend a good interactive debugger instead of tackling an emulator. The chances of destroying the hardware are low, but really, would you rather buy a new board or spend 9 months writing something that won't implement the entire system?
It's likely that the PIC 18F2520 already has an emulator core written for it, but you'll need to delve into all the hardware specs to see how all the IO is mapped still. If you're feeling up to it, it would be a good project, but I would consider just using a remote debugger instead.
You'll have to write a PIC simulator and then emulate the IO functionality of the ports.
To be honest, it looks like it's designed as a dev kit - I wouldn't worry about your code destroying the device if you take care. Unless this is a runner-up for an enterprise package, I would seriously question the ROI on writing a sim.
Is there a particular reason to make an emulator/simulator, vs. just using the real thing?
The board is inexpensive; Microchip now has the RealICE debugger which is quite a bit more responsive than the old ICD2 "hockey puck".
Microchip's MPLAB already has a built-in simulator. It won't simulate the whole board for you, but it will handle the 18F2520. You can sort of use input test vectors and log output files; I've done this before with a different Microchip IC, and it was doable but kinda cumbersome. I would suggest you take the unit-testing approach and modularize the way you do things: figure out your test inputs and expected outputs for a manageable piece of the system.
"It's likely that the PIC 18F2520 already has an emulator core written for it ..."
An open source, cross-platform simulator for microchip/PICs is available under the name of "gpsim".
It's extremely unlikely that a bug in your code could damage the physical circuitry. If that's possible, then it is either a bug in the board design or it should be very clearly documented.
If I may offer you a suggestion from many years of experience working with these devices: don't program them in assembly. You will go insane. Use C or BASIC or some higher-level language. Microchip produces a C compiler for most of their chips (dunno about this one), and other companies produce them as well.
If you insist on using an emulator, I'm pretty sure Microchip makes an emulator for nearly every one of their microcontrollers (at least one from each product line, which would probably be good enough). These emulators are not always cheap, and I'm unsure of their ability to accept complex external input.
If you still want to try writing your own, I think you'll find that emulating the PIC itself will be fairly straightforward -- the format of all the opcodes is well documented, as is the memory architecture, etc. It's going to be emulating the other devices on the board and the interconnections between them that will kill you. You might want to look into coding the interconnections between the components using a VHDL tool that will allow you to create custom simulations for the different components.
Isn't this a hardware-in-the-loop simulator problem? (e.g. http://www.embedded.com/15201692 )