Disclaimer: I am (mostly) hardware ignorant. This is probably my problem. However, I find it hard to accept that it is not possible to debug hardware, so I just wanted to get some second opinions.
We have an issue where certain actions (swapping USB devices in and out at run-time) can blow either the USB hub or the chip on our USB board (it's custom hardware). It's a fuzzy problem (the degree of "blownness" seems to vary a bit) and it manifests itself intermittently, with various symptoms that are very difficult to reliably reproduce (typically random corruption of packets).
This makes it difficult to ascertain whether a newly reported problem is due to this hardware fault or is actually a bug in the software. We have since implemented protection on these devices, but if an unprotected device is used with a protected device it can then taint the (now protected) device. One of the ports is also not protected, meaning that someone could still "kill off" a unit that should be safe by accidentally using the wrong port.
The upshot of this is that it is impossible to tell which of our devices suffer this issue without completely replacing ALL the hardware (we've bitten the bullet for most of our production hardware but there is still a lot of dev and QA hardware out there with this issue).
I would imagine that, given a piece of hardware, one could use some kind of hardware diagnostics tool to determine whether the kit is faulty or not. Am I living in a dream world? My hardware department tells me that the only tests that could prove the fault would be software tests... but as I have stated, the symptoms are very difficult to reproduce. As I'm not that experienced with hardware, I don't know if this is the only answer or not. I therefore ask the world.
Built In Test Equipment is used for performing a Built In Test
BITE for BIT
(No bytes involved.)
It is completely, utterly normal for military/aerospace equipment to have extra hardware to test itself with.
The original IBM PC had a surprising quantity of test hardware built in.
In the case of your equipment, a test device and some statistical analysis would do the trick.
This could be done in hardware in a dongle, but frankly it would be easier to do with some software.
Use two back-to-back USB to RS232 serial converters to make a USB loopback device.
Send lots of data, checksum the packets, and measure error rates.
I'm assuming your errors occur on the in->out as well as the out->in side.
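A minimal sketch of that kind of loopback soak test, assuming the back-to-back converters appear as a single serial port on the host (the port name, baud rate and packet size here are placeholders) and using the pyserial library:

```python
import os
import zlib
import serial  # pyserial

PORT = "/dev/ttyUSB0"    # placeholder: whichever end of the loopback you drive
BAUD = 115200
PACKET_SIZE = 64
NUM_PACKETS = 100_000

def run_soak_test():
    errors = 0
    with serial.Serial(PORT, BAUD, timeout=2) as link:
        for _ in range(NUM_PACKETS):
            payload = os.urandom(PACKET_SIZE)
            # Append a CRC32 so corruption anywhere in the packet is detectable.
            packet = payload + zlib.crc32(payload).to_bytes(4, "big")
            link.write(packet)
            echoed = link.read(len(packet))
            if echoed != packet:
                errors += 1
    print(f"{errors} bad packets out of {NUM_PACKETS} "
          f"({100.0 * errors / NUM_PACKETS:.4f}% error rate)")

if __name__ == "__main__":
    run_soak_test()
```

Run it for hours on known-good and suspect units alike; a statistically significant difference in error rate between the two populations is the signal you are looking for.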
Really, your hardware guys need to look at some application notes; USB IS hotplug-safe IF done according to the book.
There is a cool example out on the net of opto-coupling a USB chip's connection to the board it's on to prevent this sort of thing. The USB chip is connected to the host, powered from the host, and the interface to the USB chip is SPI, which is opto-coupled back to the rest of the board.
As for your case: the chips are failing partially. Injured devices may work fine for months and then die. An electrostatic discharge ("a static zap") can do the same thing that you describe. A device can be injured by shocks too small for you to feel.
The wires and features in semiconductors are microscopic, and easily damaged by stray electricity.
If the hardware design is mostly right, the likely cause of the problems you've been experiencing is ESD when the devices are handled to plug/unplug them. Your device has its own power supply, and its ground voltage floats relative to the other end of the USB cable until it is connected.
Hope this helps.
No it's not.
A lot of hardware manufacturers begin with hardware testing. Inputs and outputs (I/O) are just a matter of evaluating where the circuit flow is going. Consider the abstraction that both software and hardware deal in Boolean operations.
Hardware is just a little less human readable!
When it comes down to it, hardware's line of communication is (at its most basic) HIGH and LOW through various pins.
I have a brother (in the automobile tech industry) who has used an electrometer to measure voltage on pins to isolate where the problem is (I'm not really smart enough in that field to go into more detail on how he does it).
Your problem is that the only known symptom is so hard to detect (packet corruption in USB stream), that you're going to need software (at some level) to detect it.
If you can work out why packets are getting corrupted (bad voltages?) then maybe you could detect that with hardware?
Otherwise you need some kind of robust testing kit, and software to send/receive lots of packets to look for corruption?
No. That's what oscilloscopes and logic analyzers are for. Also there is more specialized equipment such as USB testers.
The simpler the hardware is, and the more access you have to the signals, the more likely you are to be able to diagnose it in a 'purely hardware' kind of way. For example if you had a simple parallel port card plugged into a PCI slot, it would be relatively straightforward to put a bus analyzer on the PCI bus, and the adapter's output, and see if the outputs did the right thing when the card was addressed. But note you'd still need to attempt to access that card from the PCI bus, which would mean either (A) some kind of PCI bus simulation, which would be one heck of a big pile of test hardware, or (B) a cheap off-the-shelf PC with a few lines of test code.
But then at the other end of the spectrum, suppose you're dealing with a large FPGA. You can get one heck of a lot of logic into an FPGA, and you won't necessarily have access to all the test points you'd like. I've personally encountered a bug with a serial port embedded in an FPGA, where a race condition with the shift register's preload register would occasionally corrupt a byte. Hypothetically the VHDL could have been reworked to bring out test points, and a pile of scopes and analyzers gathered, but from a management standpoint it was much more cost-effective to try to tease the problem out with software. Under normal usage, the bug in question would have turned up once in a blue moon. We iterated through speculation about the conditions that would elicit the bug, and refining the test code, until we had test software that could reproduce the bug 2-3 times a minute. At that point we could actually provide clues to the VHDL guys that helped them fix the problem quickly.
Long story short, inside of a week a hardware bug was smoked out via software, whereas starting with the same information and going 'hardware only' would likely have not been any faster, and would have required a lot of expensive test equipment. So, yeah, you probably can do it without software, but as usual it's a trade-off, and you have to find the right balance point between the amount of software vs hardware for the job.
I am a physicist, and I had a revelation a few weeks ago about how I might be able to use my personal computer to get much finer control over laboratory experiments than is typically the case. Before I ran off to try this out though, I wanted to check the feasibility with people who have more expertise than myself in such matters.
The idea is to use the i/o ports---VGA, ethernet, speaker jacks, etc.---on the computer to talk directly to the sensors and actuators in the experimental setup. E.g. cut open one side of an ethernet cable (with the other end attached to the computer) and send each line to a different device. I knew a postdoc who did something very similar using a BeagleBone. He wrote some assembly code that let him sync everything with the internal clock and used the GPIO pins to effectively give him a hybrid signal generator/scope that was completely programmable. It seems like the same thing should be possible with a laptop, and this would have the additional benefit that you can do data analysis from the same device.
The main potential difficulty that I foresee is that the hardware on a BeagleBone is designed with this sort of i/o in mind, whereas I expect the hardware on a laptop will probably be harder to control directly. I know for example (from some preliminary investigation, http://ask.metafilter.com/125812/Simple-USB-control-how-to-blink-an-LED-via-code) that USB ports will be difficult to access this way, and VGA is (according to VGA 15 pin port data read and write using Matlab) impossible. I haven't found anything about using other ports like ethernet or speaker jacks, though.
So the main question is: will this idea be feasible (without investing many months for each new variation of the hardware), and if so what type of i/o (ethernet, speaker jacks, etc.) is likely to be the best bet?
Auxiliary questions are:
Where can I find material to learn how I might go about executing this plan? I'm not even sure what keywords to plug in on Google.
Will the ease with which I can do this depend strongly on operating system or hardware brand?
The only cable I can think of for a PC that can get close to this would be a parallel printer cable, which has pretty much gone away. It's a 25-wire cable; the data is spread across eight of those lines so it can send eight bits at the same time. I'm just not sure if you can target a specific line or if it's more of a left-to-right fill as data is sent.
To use one on a laptop today would definitely be difficult. You won't find any laptops with parallel ports. There are USB-to-parallel cables and serial-to-parallel cables, but I would guess that the only control you would have is over the USB or serial interface and not the parallel lines.
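For what it's worth, on a machine that still has a parallel port you can target specific lines: the eight data lines (pins 2-9) are each driven by one bit of the byte you write to the port's data register, so it is not a left-to-right fill. A sketch using the pyparallel library (the assumption here is a Linux box exposing the first port, which is what pyparallel opens by default):

```python
import time
import parallel  # pyparallel

# Open the first parallel port (e.g. /dev/parport0 on Linux).
port = parallel.Parallel()

# Each bit of the byte drives one data pin: bit 0 -> pin 2, bit 1 -> pin 3, ...
port.setData(0b00000001)   # raise only pin 2
time.sleep(0.5)
port.setData(0b10000000)   # raise only pin 9
time.sleep(0.5)
port.setData(0x00)         # all data pins low
```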
As for Ethernet, you have four twisted pairs, with only two pairs in use on a 10/100 Mbit link and two pairs spare.
There's some hardware available called Z-Wave that you might want to look into. Z-Wave will allow you to build a network of devices that communicate in a mesh. I'm not sure what kind of response time you need.
I actually just thought of something that might be a good solution. Check out security equipment. There's a lot of equipment available for PCs that monitors doors, windows, sensors, etc. That industry might have what you're looking for.
I think the easiest way would be to use the USB port with the device enumerating as a Human Interface Device (HID): a custom-built program on a PIC that includes USB functionality encodes the data to be sent to the computer, and in that way you can program it independently of the OS, because all major OSes have built-in HID support.
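On the host side, a device built that way can be read with a generic HID library and no custom driver. A sketch using the Python hidapi binding (the vendor/product IDs and report size are placeholders for whatever the PIC firmware actually enumerates with):

```python
import hid  # hidapi binding (pip install hidapi)

VENDOR_ID = 0x04D8   # placeholder: adjust to your device's VID
PRODUCT_ID = 0x003F  # placeholder: adjust to your device's PID

dev = hid.device()
dev.open(VENDOR_ID, PRODUCT_ID)
dev.set_nonblocking(False)
try:
    while True:
        # Read one input report (up to 64 bytes for a full-speed HID endpoint).
        report = dev.read(64)
        if report:
            print("report:", bytes(report).hex())
finally:
    dev.close()
```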
Anyway, if you used your mic/VGA/HDMI or whatever other port, you would still need a device to encode and transmit the data, and another program inside the computer to decode the data being sent.
And remember that different hardware has different software (drivers) that might decode the raw data in odd ways, rendering your I/O hardware-dependent.
Hope this helps, but that's why USB was invented in the first place: to make I/O hardware- and OS-independent.
I would like to have an Arduino operating on a CAN network. Does software that provides the OSI-model network layer exist for Arduino? I would imagine detecting the HIGH/LOW levels with GPIO/ADC and driving the signal onto the network with a DAC. It would be nice to have that without any extra hardware attached. I don't mind having the terminating resistor required by the CAN network, though.
By Arduino I mean any of them. My intention is to keep the development environment.
If such software does not exist, is there any technical obstacle to it, such as limited flash size (again, I don't mean a particular board with a certain ATmega chip)?
You can write a bit-banging CAN driver, but it has many limitations.
First there's the timing: it's hard to achieve the correct bit timing and also the arbitration.
You will be able to get 10 kbit/s or perhaps even 50 kbit/s, but that consumes a huge amount of your CPU time.
And the code itself is a pain.
You have to calculate the CRC on the fly (easy), but implementing the collision detection and all the timing parameters is not easy.
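For reference, the CRC really is the easy part: CAN uses a 15-bit CRC with polynomial 0x4599, computed bit by bit over the frame up to the end of the data field. A sketch of just that calculation (bit stuffing, arbitration and bit timing, the genuinely hard parts, are not shown):

```python
CAN_CRC15_POLY = 0x4599  # x^15 + x^14 + x^10 + x^8 + x^7 + x^4 + x^3 + 1

def can_crc15(bits):
    """Compute the CAN CRC-15 over a sequence of frame bits (0/1),
    following the shift-register procedure in the Bosch CAN spec."""
    crc = 0
    for bit in bits:
        crc_next = bit ^ ((crc >> 14) & 1)
        crc = (crc << 1) & 0x7FFF
        if crc_next:
            crc ^= CAN_CRC15_POLY
    return crc

# Example over an arbitrary bit string (not a complete CAN frame).
frame_bits = [0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1]
print(f"CRC-15 = 0x{can_crc15(frame_bits):04X}")
```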
I once did this for a company, but it was a really bad idea.
Better to buy a chip for 1 Euro and be happy.
There are several CAN bus shield boards available (e.g. this, and this), and that would be a far better solution. It is not just a matter of the controller chip; the bus interface, line drivers, and power all need to be considered. If you have the resources and skills you can of course create your own board or bread-board for less.
Even if you bit-bang it via GPIO you would need some hardware mods I believe to handle bus contention detection, and it would be very slow and may not interoperate well with "real" CAN controllers on the bus.
If your aim is to communicate between devices of your own design rather than off-the-shelf CAN devices, then you don't need CAN for that; something proprietary will suffice, and a UART will perform faster than a bit-banged CAN implementation.
I don't think that such software exists. The CAN bus is more complex than, for example, I2C. Basically you would have to implement the functionality of both the CAN controller and the CAN transceiver. See this thread for more details (in German).
Alternatively you could use one of the CAN shields. Another option would be to use a BeagleBone with a suitable CAN cape.
Also take a look at AVR-CAN.
I am using a serial NOR flash (SPI-based) for my embedded application, and I also have to implement a file system over it. That makes my NOR flash prone to frequent erase and write cycles, which is where a wear-levelling algorithm comes into the picture. I want to ask a few questions regarding this:
First, is it possible to implement a wear-levelling algorithm for NOR flash? If yes, why do I mostly find solutions for NAND flash and not NOR flash?
Second, are low-cost serial SPI-based NAND flashes available? If yes, kindly share a part number.
Third, how difficult is it to implement our own wear-levelling algorithm?
Fourth, I have also read/heard that industrial-grade NOR flashes have higher erase/write endurance (in the millions of cycles!). Is this understanding correct? If yes, kindly let me know the details of such an SPI NOR flash. It might let me avoid implementing a wear-levelling algorithm altogether; if not, it should at least give me a little room and ease in certain areas when implementing my own wear-levelling algorithm.
The constraint on all of these points is cost; I want a low-cost solution to these issues.
Implementing a wear-levelling algorithm is not trivial, but not impossible either (a minimal sketch follows the list below):
Your wear-levelling driver needs to know when disk blocks are no longer used by the filing system (this is known as TRIM support on modern SSDs). In practice, this means you need to modify your block driver API and filing systems above it, or have the wear-levelling driver aware of the filing system's free-space map. This second option is easy for FAT, but probably patented.
You need to reserve at least an erase unit plus a few allocation units to allow erase-unit recycling. Reserving more blocks will increase performance.
You'll want a background thread to perform asynchronous erase-unit recycling.
You'll need to test, test and test again. When I last built one of these, we built a simulation of the flash, ran the real filing system on top of it, and tortured the system for weeks.
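A minimal sketch of the core idea behind dynamic wear levelling: a logical-to-physical block map plus per-block erase counts, with every rewrite landing on the least-erased free block. Everything is held in RAM here purely for illustration; a real driver must persist the map, handle power failure, and issue the actual flash erase/program commands:

```python
class WearLeveller:
    """Toy dynamic wear-leveller: each logical block is remapped onto the
    least-erased free physical block on every rewrite."""

    def __init__(self, num_physical_blocks, num_logical_blocks):
        assert num_logical_blocks < num_physical_blocks  # spare blocks for recycling
        self.erase_counts = [0] * num_physical_blocks
        self.logical_to_physical = {}                     # logical -> physical
        self.free_blocks = set(range(num_physical_blocks))
        self.storage = {}                                 # stand-in for the flash array

    def write(self, logical_block, data):
        # Pick the free physical block that has been erased the fewest times.
        target = min(self.free_blocks, key=lambda b: self.erase_counts[b])
        self.free_blocks.remove(target)
        self.storage[target] = data

        # Release the previously mapped block: erase it and return it to the pool.
        old = self.logical_to_physical.get(logical_block)
        if old is not None:
            self.erase_counts[old] += 1
            self.storage.pop(old, None)
            self.free_blocks.add(old)

        self.logical_to_physical[logical_block] = target

    def read(self, logical_block):
        return self.storage[self.logical_to_physical[logical_block]]


# Rewriting the same logical block 100 times spreads the erases over all blocks.
wl = WearLeveller(num_physical_blocks=8, num_logical_blocks=4)
for i in range(100):
    wl.write(logical_block=0, data=f"revision {i}".encode())
print(wl.read(0), wl.erase_counts)
```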
There are lots and lots of patents covering aspects of wear-levelling. By the same token, there are at least two wear-levelling layers in the Linux kernel.
Given all of this, licensing a third-party library is probably cost-effective.
Atmel/Adesto etc. make those little serial flash chips by the billion. They also have loads of online docs. I suspect that the serial flash beetles don't implement wear-levelling because of cost: the devices they are typically used in are very cheap and tend to have a limited lifetime anyway. Bulk, 4-line NAND flash that is expected to see heavier and lengthier use (e.g. SD cards) has (relatively) complex built-in controllers that can implement wear-levelling in a transparent manner.
I no longer use one-pin interface serial flash, partly due to the wear issue. An SD-card is cheap enough for me to use and, even if one does break, an on-site technician, (or even the customer), can easily swap it out.
Implementing a wear-levelling algorithm is too expensive for me to bother with, both in terms of development time (especially testing, if the device has to support a file system that must not corrupt on power failure, etc.) and CPU/RAM.
If your product is so cost-sensitive that you have to use serial NOR flash, I suggest that you ignore the issue.
I'm not interested in a hardware solution; I want to know about software that may "read" a modulated signal received through the power supply - some sort of low-level driver that would access the power signal in a convenient place and demodulate it.
Is there a way to receive signal from the computer's power supply? I'm interested in an API or library that would allow the computer to be seen as a node in a Power Line Communication network and receive data directly through the power cable, without the need for a converter. Is there any active research in this field?
Edit:
There is software that reads, monitors and displays internal component voltages - the DC voltage after being converted and filtered by the power supply. What I need is a method of data encoding that would be invariant to conversion and filtering, so that the original signal embedded in the AC is still present in some form within the converted DC signal.
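For context, the voltage readout that is normally available to software is a slowly sampled, heavily filtered DC value from the motherboard's monitoring chip. On Linux it is exposed through the hwmon sysfs interface; a sketch (which hwmon node and which in* channel map to which rail depend entirely on the sensor driver):

```python
import glob
import time

def read_hwmon_voltages():
    """Read every in*_input file exposed by hwmon drivers (values are in millivolts)."""
    readings = {}
    for path in glob.glob("/sys/class/hwmon/hwmon*/in*_input"):
        try:
            with open(path) as f:
                readings[path] = int(f.read().strip()) / 1000.0  # mV -> V
        except OSError:
            pass
    return readings

# Sample for a few seconds: the values barely move, and the sampling rate is far
# too low to recover anything modulated onto the AC side of the supply.
for _ in range(5):
    print(read_hwmon_voltages())
    time.sleep(1.0)
```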
This is not possible, as described in the question. Yes, with extra hardware you can do it. No, with the standard hardware in a PC, you could not.
As others have noted, among other problems, the only information you can get from a generic PC is a bit of voltage info for the CPU. It's not going to give a picture of the AC signal, nor any signal modulated on top of it. You'll be watching a few highly regulated DC signals deep inside the computer, probably converted at a relatively low rate too. Almost by definition, if you could see external information on any of those signals, your machine is already suffering a hardware failure and chances are the CPU will be crashing soon...
*blink* No...
Edit: I mean, there's the possibility to use the powerlines as network cables, but only with special adapters. And it is just designed for home networks.
Edit2: You can't read something from the power supply of a computer...it's not designed for that. You would have to create your own component/adapter for this.
Am I mis-reading this? Wouldn't this be a pure hardware solution?
This is highly improbable without adding some hardware.
You see, the power supplies in a regular PC are switching power supplies which effectively decouple the AC input from the supplied DC voltage needed on the PC side. The AC side just basically provides power that fuels the high-speed power switching circuitry.
Also, a DC signal, by definition, doesn't provide a signal per se: it is a "static" power level (and yes the power level does vary a bit in the time domain but not as an easy to leverage function).
Yes there can be an AD (Analog to Digital) monitoring chip that can be used on the PC side to read the voltage of the DC component supplied to the motherboard etc., but that doesn't mean there is still a signal that can be harvested: the original power line "signal" might have been through enough filters that there isn't a "signal" left to be processed.
Lastly, one needs to consider that power supply design varies from company to company; this fact will undoubtedly affect any possible design of a communication solution.
What you describe is possible, but unfortunately you need an adapter to convert the signal running on the power lines into sensible network traffic.
The power line acts as a physical medium, and thus sits at the lowest level of the OSI stack. Conversion from an electrical signal to sensible network traffic requires a hardware adapter, just as with an Ethernet adapter. Your computer is unable to understand this traffic, since its power supply was not built to transmit that information. But note that you can easily find an adapter, and it will work the same as an Ethernet adapter: that is, it will be accessible through the standard BSD socket library.
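To illustrate the "accessible through the standard BSD socket library" point: once the power-line adapter presents itself as an ordinary network link, application code is completely unchanged. A trivial sketch (the peer address is a placeholder for another host reachable over the power-line segment):

```python
import socket

PEER = ("192.168.1.50", 9000)  # placeholder address and port

with socket.create_connection(PEER, timeout=5) as sock:
    sock.sendall(b"hello over the mains wiring\n")
    reply = sock.recv(1024)
    print("received:", reply)
```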
This is ENTIRELY possible, although you would need to either buy or build some hardware to make it happen. In addition, the software solution would be very, very complex.
The computer's power supply would be out of the picture for the most part. You need to read data straight from the wall with as little extraneous noise as possible. From the electrical engineering perspective, this is a very thoroughly covered topic. In the end, all you're really doing is an analog to digital conversion, and the rest keeps your circuit from being fried.
The software solution would basically be eliminating random noise and looking for embedded signals. The math behind analog signal analysis is very complex, and you can spend a few semesters in college covering the topic and the rest of your career trying to master it. If you're good at it, there's a cushy job for you on Wall Street predicting the stock market.
And that only covers reading incoming signals. Transmitting is a whole 'nother sport.
Now, it also sounds like you might be interested in a hack. That is...
You could buy a commercial-off-the-shelf power-line Ethernet adapter and tear it apart. They have two prongs that plug into a standard wall outlet. You could remove these and wire them to the INSIDE of a power supply.
To do that, you'd have to tear apart a power supply as well, which is incredibly dangerous and I hereby warn you and anyone else to NEVER attempt this.
The entire Ethernet adapter could be tucked into the power supply and you could basically have an Ethernet port on the surface of your power supply (either inside or outside the computer). Simply wire that to a standard Ethernet adapter and voila (!), you have nothing but a power cable connecting your computer to the wall outlet, AND you magically have Ethernet!
Note that there also has to be another power-line Ethernet adapter somewhere else for you to establish a network and make the whole project useful.
How can you read modulated data from the power supply? You are talking about voltage and ohms, and apart from a possible electrical shock (which would be just shocking :)) there is nothing there for software to read. There are specialized electrical plugs with Ethernet jacks in them that you can use instead.
I would hazard a guess that this is totally transparent, as per Adrien Plisson's answer, i.e. you would have all of the OSI layers and it would be no different from any other network link. You can write code to read from the sockets.
AFAIK no company that produces these electrical plugs would ever open up the API, for competition reasons. The technology is still in its early stages and adoption is low because it is obviously very expensive (120 euro here in my country for a pair of them) and it does not deliver the quoted speed: a 100 Mbps power plug may get maybe 85 Mbps due to varying conditions and phenomena on the power line (think surges, brown-outs, interference).
My 2 cents.
ASIDE: Yes, this can be considered a subjective question, but I hope to draw conclusions from the statistics of the responses.
There is a broad spectrum of computing devices. They range in physical size, computational power and electrical power. I would like to know what embedded developers think the determining factor(s) are that make a system "embedded." I have my own determination that I will withhold for a week so as not to influence the responses.
I would say "embedded" is any device on which the end user doesn't normally install custom software of their choice. So PCs, laptops and smartphones are out, while XM radios, robot controllers, alarm clocks, pacemakers, hearing aids, the doohickey in your engine that regulates fuel injection etc. are in.
You might just start with wikipedia for a definition
http://en.wikipedia.org/wiki/Embedded_system
"An embedded system is a computer system designed to perform one or a few dedicated functions, often with real-time computing constraints. It is embedded as part of a complete device often including hardware and mechanical parts. "
Coming up with a concrete set of rules for what an embedded system is is, to a large degree, pointless. It's a term that means different things to different people - maybe even different things to the same people at different times.
There are some things that are pretty much never considered an embedded system, for example a Windows desktop machine. However, there are companies that put their software on a Windows box - even a bog-standard PC (maybe a laptop) - and set things up so their application loads automatically and hides the desktop. They sell that as a single-purpose machine that many people would call an embedded system (but many people wouldn't). Microsoft even sells a set of tools called Windows Embedded that helps enable these kinds of applications, though it's targeted more at OEMs who will customize the system at least somewhat instead of just putting it on a standard PC. Windows Embedded is used for things like ATMs and many other devices. I think that most people would consider an ATM an embedded system.
But go into a 7-11 with an ATM that has a keyboard (I honestly don't know what the keyboard is for), press the right shift key 5 times and you'll get a nice Windows "StickyKeys" messagebox (I wonder if there's an exploit there - I sure hope not). So there's a Windows system there, just hidden and with some functionality removed - maybe not as much as the manufacturer would like. If you could convince it to open up notepad.exe somehow does the ATM suddenly stop being an embedded system?
Many, many people consider something like the iPhone or the iTouch an embedded system, but they have nearly as much functionality as a desktop system in many ways.
I think most people's definition of an embedded system might be similar to Justice Potter Stewart's definition of hard-core pornography:
I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description; and perhaps I could never succeed in intelligibly doing so. But I know it when I see it...
I consider an embedded system one where the software is rarely developed directly on the target system. This definition includes sophisticated embedded systems like the iPhone, and excludes primitive desktop systems like the Commodore 64. Not having the development tools on the target means you have to add 'reprogram device' to the edit-compile-run cycle. Debugging is also made more complicated. This encompasses most of the embedded "feel."
Software implemented in a device not intended as a general purpose computing device is an "embedded system".
Typically the system is intended for a single purpose, and the software is static.
Often the system interacts with non-human environmental inputs (sensors) and mechanical actuators, or communication with other non-human systems.
That's off the top of my head. Other views can be read at this embedded.com article
Main factors:
Installed in a fixed place somewhere (you can't carry the device itself around, only the thing it's built into)
They run a long time (often years) with little maintenance
They don't get patched often
They are small, use little power
Small or no display
+1 for a great question.
Like many things there is a spectrum.
At the "totally embedded" end you have devices designed for a single purpose. Alarm clocks, radios, cameras. You can't load new software and make it do something else. THere is no support for changing the hardware,
At the "totally non-embedded" end you have your classic PCs where everything, both HW and SW, can be replaced.
There's still a lot in between those extremes. Laptops and netbooks, for example, have minimally expandable HW, typically only memory and hard disk can be upgraded. But, the SW can be whatever you want.
My education was as a computer engineer, so my definition of embedded is hardware oriented. I draw the line at the MMU (memory management unit). If a chip has an MMU, it usually has off-chip RAM and runs an OS. If a chip does NOT have an MMU, it usually has on-board RAM and runs an RTOS, microkernel or custom executive.
This means I usually dismiss anything running linux, which is shortsighted. I admit my answer is biased towards where I tend to work: microcontroller firmware. So I am glad I asked this question and got a full spectrum of responses.
Quoting a paragraph I've written before:
An embedded system for our purposes is a computer system that has a specific and deterministic functionality\cite{LamieReal}. Typically, processors for embedded systems contain elements such as onboard RAM, special-purpose processing elements such as a digital signal processor, analog-to-digital and digital-to-analog converters. Since the processors have more flexibility than a straightforward CPU, a common term is microcontroller.