I am working on interfacing a microSD card with an 8-bit microcontroller. I do not yet have the hardware, so I cannot track the problem down with an oscilloscope.
I have been reading the Physical Layer Simplified Specification Version 3.01 and I found this:
As long as the card is busy programming, a
continuous stream of busy tokens will be sent to the host (effectively holding the DataOut line low). (p. 117)
This seems to contradict all the examples I have seen, unless there is a logic inversion I am unaware of. In all snippets and discussions I have come across the card is busy while the micro-controller reads 0xFF.
Is there a definitive guide on using the SD card in SPI mode? It seems as though the previously mentioned document could have discussed the SPI voltage levels more clearly. Thank you in advance for your help,
Sam
In all snippets and discussions I have come across the card is busy while the micro-controller reads 0xFF.
You are very wrong on this one: the card sends 0xFF when it is ready, and any other value means it is busy. While the card is busy it holds DataOut low, so you read 0x00 -- exactly what the spec excerpt says, with no logic inversion involved.
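As a minimal sketch of the resulting busy-wait in C, assuming a byte-wide spi_transfer() helper (a hypothetical name for whatever routine exchanges one byte on your SPI peripheral):

    /* Wait for the card to finish programming: keep clocking 0xFF out and
     * watch what comes back. While busy the card holds DO low, so reads
     * return 0x00 (or at least not 0xFF); a 0xFF read means the card has
     * released the line and is ready again. */
    #include <stdint.h>

    uint8_t spi_transfer(uint8_t out);   /* provided by your SPI driver */

    void sd_wait_ready(void)
    {
        while (spi_transfer(0xFF) != 0xFF)
            ;   /* still busy */
    }

A real driver would add a timeout so a dead card cannot hang the loop forever.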
I'd like to test an SPI slave on my STM32F4 via JTAG boundary-scan methods (ideally using OpenOCD rather than some special-purpose tool).
Does anyone know the details and typical pitfalls of such an approach?
What I found was this site, while this one neatly explains boundary scanning.
I am thankful for any hint on that topic.
As the linked site points out, testing your µC's output on the SPI pins through boundary scan will suffer from very low speed (because you have to feed the corresponding bit-banging commands through the boundary-scan protocol, which is far from efficient).
Since you are using an STM32F4 controller, I therefore suggest keeping the CPU halted in debug (break) and setting up the GPIOs and SPI through JTAG (just as the firmware would do from the inside). Then you are free to put entire data bytes/words into the TX register and poll the SPI status and RX registers. This is one or two levels above the (plain) boundary-scan method, but it will be quite easy to implement.
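As a rough illustration (a sketch, not a tested recipe): with the core halted you can poke the peripheral registers from an OpenOCD telnet session or script. The addresses and values below assume SPI1 as master on PA5/PA6/PA7 and come from my reading of the STM32F4 reference manual -- double-check them against your exact part, pinout and clock setup before relying on them.

    # assumes nothing else has been enabled yet: these writes replace the
    # whole RCC enable registers rather than OR-ing bits in
    halt
    mww 0x40023830 0x00000001   ;# RCC_AHB1ENR: GPIOAEN
    mww 0x40023844 0x00001000   ;# RCC_APB2ENR: SPI1EN
    # PA5/PA6/PA7 to alternate function 5 (SPI1), JTAG pins left as they are
    mww 0x40020000 0xA800A800   ;# GPIOA_MODER
    mww 0x40020020 0x55500000   ;# GPIOA_AFRL
    # master mode, software NSS, slowest baud rate, peripheral enabled
    mww 0x40013000 0x0000037C   ;# SPI1_CR1
    # push one byte out, then poll the status register and read the answer
    mwh 0x4001300C 0xA5         ;# SPI1_DR: transmit 0xA5
    mdw 0x40013008              ;# SPI1_SR: check RXNE (bit 0)
    mdh 0x4001300C              ;# SPI1_DR: received byte

From there, writing "entire data bytes/words" is just a matter of repeating the last three commands, or scripting them in the same Tcl session.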
Only if you want to take this idea a few steps further, you can use JTAG first to switch the clock settings to a higher speed, or add DMA (writing larger amounts of data to RAM before triggering the SPI transfer).
We have a system based around an Atom Z510/Intel SCH US15W Q7 card (running Debian Linux). We need to transfer blocks of data from a device on the Low Pin Count bus. As far as I know this chipset does not provide DMA facilities, meaning the processor has to read the data out a byte at a time in a software loop. (The device driver actually implements this using the x86 "rep insb" instruction, so if I understand correctly the loop is executed by the CPU itself.)
This is far from optimal, but it should be possible to hit a transfer rate of 14 Mbit/s. Instead we can barely manage 4 Mbit/s, with transactions on the bus no closer than 2 µs apart, even though each read from the slave device completes in 560 ns. I don't believe other traffic on the bus is to blame, but I am still investigating.
My question is:
Does any one know if there are any configuration registers on the SCH that could affect the LPC bus timing?
I cannot find any useful information on the device on the Intel website, nor have I spotted anything in the Linux kernel code that appears to be fiddling with any such registers (but I'm a noob when it comes to Linux kernel stuff).
I'm not an x86 expert so any other factors that might come into play or any other 'war stories' relating to this device would be good to know about too.
Edit: I have found the datasheet. I've not seen anything in it that explains this behaviour, but I am investigating the possibility of mapping our device as a firmware device as the firmware bus cycles don't seem to suffer the same delays.
For the record, the solution was to modify the FPGA firmware so that the chip's data in/out register is mapped at four adjacent addresses, and to modify the driver to do 32-bit port reads and writes (inl/outl) instead. Although the SCH does not implement 32-bit LPC read/write operations, the result is four back-to-back 8-bit operations followed by the same dead time I was getting previously with a single byte, meaning it averages about 1 µs per byte. Not ideal, but still a doubling in throughput.
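For reference, a minimal sketch of what the modified read loop boils down to (the port address and names are placeholders, not the real driver code):

    /* Block read using 32-bit string I/O, assuming the FPGA now decodes its
     * data register at four consecutive I/O addresses starting at
     * DEV_DATA_PORT (hypothetical). Each 32-bit read turns into four
     * back-to-back 8-bit LPC cycles followed by the usual dead time. */
    #include <linux/types.h>
    #include <asm/io.h>

    #define DEV_DATA_PORT 0x0300            /* placeholder I/O base address */

    static void dev_read_block(u8 *buf, size_t len)
    {
        insl(DEV_DATA_PORT, buf, len / 4);  /* rep insl: len/4 32-bit reads */
    }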
It transpires the firmware cycles were quicker because the SCH transfers 64 bytes at a time from the firmware flash; after 64 bytes there is the same 1.4 µs gap, indicating this is the per-transaction latency of the device. Exploiting this might have been slightly quicker than the above solution, but the trade-off is that it is limited to 64-byte chunks and each byte takes longer (680 ns IIRC) due to the additional cycles required for a firmware read.
I have a photocell that gives me the intensity of light as a voltage. I want to add a unique number (that I can hard-code on the chip) along with the photocell reading and send it in a format I can read with a digital computer (Arduino). Any suggestions on where I can start?
Sounds like you want to shop for a cheap microcontroller that's easy to work with, has an ADC and a small amount of flash memory. Almost every silicon vendor will claim to have something meeting that requirement; what you are familiar with and can easily buy in appropriate quantities may matter as much as the technical details of the offering. If you already have an Arduino, another Atmel part such as one of the 8-pin ATtinys might be attractive.
You write a little program with a loop that reads the photocell through the ADC, bundles the reading with an ID number stored in flash beside your program, and ships it off through something like a serial port to whatever system needs the information.
The serial port can be a UART peripheral or bit-banged in a software timing loop. For short runs people often skip the line driver/receiver on each end and signal at logic voltage rather than at the higher (and inverted; the drivers/receivers invert for you) voltages of the RS-232 spec. Or you can use other schemes; synchronous ones like SPI or I2C are popular as well.
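A very rough sketch of that loop in C; adc_read(), uart_putc(), delay_ms() and SENSOR_ID are placeholder names for whatever your chosen part and toolchain provide, not anything vendor-specific:

    /* Read the photocell, prepend a hard-coded ID and a sync byte, and push
     * the frame out of a UART (hardware or bit-banged). All helpers here are
     * hypothetical; wire them to your part's ADC and serial routines. */
    #include <stdint.h>

    #define SENSOR_ID 0x2A                /* the unique number kept in flash */

    uint16_t adc_read(uint8_t channel);   /* e.g. a 10-bit conversion result */
    void     uart_putc(uint8_t c);
    void     delay_ms(uint16_t ms);

    int main(void)
    {
        for (;;) {
            uint16_t light = adc_read(0);
            uart_putc(0xAA);              /* sync/start byte */
            uart_putc(SENSOR_ID);
            uart_putc(light >> 8);        /* high byte */
            uart_putc(light & 0xFF);      /* low byte */
            delay_ms(100);
        }
    }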
You might want to look at maxim one-wire bus devices.
They share a bus connection and ground, and your Arduino can interrogate the bus.
Each device has a unique identifier and can be read.
The DS2438 is a cheap member of the family that can measure voltage.
(The DS2450 was a quad A/D converter but it was a buggy chip and is now obsolete.)
Arduino drivers are at http://www.arduino.cc/playground/Learning/OneWire
I have seen a lot of information about MMC/SD cards and I tried to make a library to read them (modifying the Procyon AVRlib).
But I have some problems. I haven't changed the original code and tried it as-is. My problem is with the initialization of an SD card. I have two cards here, a 256 MB one and a 1 GB one.
I send the init commands in this order: CMD0, CMD55, ACMD41, and CMD1.
But the 256 MB card only ever returns a 0x01 response for each command. I have sent CMD1 many times and the 256 MB card always returns 0x01, never 0x00.
The 1 GB card is even stranger... CMD0 returns 0x01, fine, but CMD55 responds with 0x05. At other times it responds with 0xC1, and sometimes it responds with 0xF0 followed by 0x5F on the next iteration...
There is information and there are examples around the Internet, but it is all a bit confusing. In my project I must use a 1 GB card, and I'm trying with a microSD card in an SD adapter (I don't think that is the problem).
How do I fix this problem?
PS: My problem is like the one in the Stack Overflow question Initializing SD card in SPI issues, but that solution didn't solve mine. The 1 GB SD card only ever returns 0x01... :cry:
Why do you need CMD1? And did you read the note below it, that says "CMD1 is a valid command for the thin (1.4 mm) standard size SD memory card only if used after re-initializing a card (not after power on reset)."?
About the 1 GB card, ideas that come to mind:
After every command (send command, get reply), do you send 8 dummy bytes before making CS high?
The values returned seem weird (0x05 doesn't have busy bit set, so WTF?), maybe there's a hardware issue?
Does the card work otherwise?
Maybe this helps a bit:
SD Specifications Part 1: Physical Layer Simplified Specification
A simple explanation of MMC/SD usage over SPI is provided here. I have used the associated FAT file-system library too and it works well.
However, the solution may not work for some makes of card; for those you may have to adapt the procedure/library. That may be why your 1 GB card acts differently -- it may simply be a different make. SPI mode is not used much by commercial equipment, so some card manufacturers may deviate from the specification more in that mode.
If you bit-bang the commands and clocks you have more control and more confidence that those procedures are correct. That is useful, because you need some solid ground to build on before progressing bit by bit. I found that the initial 80 clocks at under 400 kHz were critical on one card, while another could run at more than 2 MHz.
Try to progress one command at a time, making it reliable for both cards before moving on.
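To make that concrete, here is a stripped-down sketch of sending a single command and polling for its R1 response, assuming a byte-wide spi_transfer() helper and chip-select handling elsewhere (both hypothetical); the fixed CRCs mentioned in the comments (0x95 for CMD0, 0x87 for CMD8 with argument 0x000001AA) are the usual published values:

    /* Send one SPI-mode command frame and poll for the R1 response.
     * spi_transfer() is a hypothetical byte-exchange helper; chip select is
     * assumed to be asserted around the call. */
    #include <stdint.h>

    uint8_t spi_transfer(uint8_t out);

    uint8_t sd_command(uint8_t cmd, uint32_t arg, uint8_t crc)
    {
        uint8_t r1 = 0xFF;

        spi_transfer(0x40 | cmd);          /* start + transmission bit + index */
        spi_transfer(arg >> 24);
        spi_transfer(arg >> 16);
        spi_transfer(arg >> 8);
        spi_transfer(arg);
        spi_transfer(crc);                 /* only CMD0 (0x95) and CMD8 (0x87)
                                              need a valid CRC in SPI mode */

        for (uint8_t i = 0; i < 10; i++) { /* R1 arrives within a few bytes */
            r1 = spi_transfer(0xFF);
            if (!(r1 & 0x80))              /* a valid R1 has bit 7 clear */
                break;
        }
        return r1;
    }

    /* Typical order: >= 74 clocks with CS high, CMD0 (expect 0x01), CMD8,
     * then CMD55 + ACMD41 in a loop until R1 == 0x00. Fall back to CMD1 only
     * for old MMC cards that reject ACMD41. */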
Disclaimer: I am (mostly) hardware-ignorant. This is probably my problem. However, I find it hard to accept that it is not possible to debug hardware, so I wanted to get some second opinions.
We have an issue where certain actions (swapping USB devices in and out at run time) can blow either the USB hub or a chip on our USB board (it's custom hardware). It's a fuzzy problem (the degree of "blownness" seems to vary) and it manifests itself intermittently, with symptoms that are very difficult to reproduce reliably (typically random corruption of packets).
This makes it difficult to ascertain whether a newly reported problem is due to this hardware fault or is actually a bug in the software. We have since implemented protection on these devices, but if an unprotected device is used with a protected device it can then taint the (now protected) device. One of the ports is also not protected, meaning that someone could still kill off a unit that should be safe by accidentally using the wrong port.
The upshot of this is that it is impossible to tell which of our devices suffer this issue without completely replacing ALL the hardware (we've bitten the bullet for most of our production hardware but there is still a lot of dev and QA hardware out there with this issue).
I would imagine that, given a piece of hardware, it should be possible to use some kind of hardware diagnostic tools to determine whether the kit is faulty or not. Am I living in a dream world? My hardware department tells me that the only tests that can prove the fault are software tests, but as I have stated, the symptoms are very difficult to reproduce. As I'm not that experienced with hardware, I don't know whether that is the only answer or not. I therefore ask the world.
Built In Test Equipment is used for performing a Built In Test
BITE for BIT
(No bytes involved.)
It is completely, utterly normal for military/aerospace equipment to have extra hardware to test itself with.
The original IBM PC had a surprising quantity of test hardware built in.
In the case of your equipment, a test device and some statistical analysis would do the trick.
This could be done in hardware in a dongle, but frankly it would be easier to do with some software.
Use two back-to-back USB to RS232 serial converters to make a USB loopback device.
Send lots of data, checksum the packets, and measure error rates.
I'm assuming your errors occur on the in->out side as well as the out->in side.
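A rough sketch of such a test in C, assuming the two converters are wired back to back and enumerate as /dev/ttyUSB0 and /dev/ttyUSB1 (the device paths are assumptions, and error handling is trimmed for brevity):

    /* Loopback soak test: send pseudo-random blocks out of one port, read
     * them back on the other, and count mismatches so you can compare error
     * rates between suspect and known-good boards. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <termios.h>
    #include <unistd.h>

    static int open_port(const char *path)
    {
        int fd = open(path, O_RDWR | O_NOCTTY);
        struct termios t;
        tcgetattr(fd, &t);
        cfmakeraw(&t);
        cfsetspeed(&t, B115200);
        t.c_cc[VMIN]  = 0;           /* read() returns after...           */
        t.c_cc[VTIME] = 10;          /* ...at most 1 s of silence         */
        tcsetattr(fd, TCSANOW, &t);
        return fd;
    }

    int main(void)
    {
        int tx = open_port("/dev/ttyUSB0");
        int rx = open_port("/dev/ttyUSB1");
        unsigned char out[256], in[256];
        unsigned long errors = 0, blocks = 0;

        for (;;) {
            for (size_t i = 0; i < sizeof out; i++)
                out[i] = rand() & 0xFF;
            (void)write(tx, out, sizeof out);

            size_t got = 0;
            ssize_t n;
            while (got < sizeof in &&
                   (n = read(rx, in + got, sizeof in - got)) > 0)
                got += (size_t)n;

            if (got != sizeof in || memcmp(out, in, sizeof in) != 0)
                errors++;
            if (++blocks % 1000 == 0)
                printf("%lu bad blocks out of %lu\n", errors, blocks);
        }
    }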
Really, your hardware guys need to look at some application notes; USB IS hotplug-safe IF done according to the book.
There is a cool example out on the net of opto-coupling a USB chip's connection to the board it's on, to prevent this sort of thing. The USB chip is connected to the host and powered from the host, and the interface to the USB chip is SPI, which is opto-coupled back to the rest of the board.
As for your case, the chips are failing partially. Injured devices may work fine for months and then die. An electrostatic discharge (a "static zap") can do exactly what you describe, and a device can be injured by shocks too small for you to feel.
The wires and features in semiconductors are microscopic, and easily damaged by stray electricity.
If the hardware design is mostly right, the likely cause of the problems you've been experiencing is ESD when the devices are handled to plug/unplug them. Your device has its own power supply, and its ground voltage floats relative to the other end of the USB cable until it is connected.
Hope this helps.
No, it's not.
A lot of hardware manufacturers begin with hardware testing. Inputs and outputs (I/O) are just a matter of evaluating where the circuit flow is going. Consider the abstraction that both software and hardware deal in boolean operations.
Hardware is just a little less human readable!
When it comes down to it, hardware's line of communication is (at its most basic) HIGH and LOW through various pins.
I have a brother (in the automotive tech industry) who has used an electrometer to measure voltages on pins to isolate where a problem is (I'm not knowledgeable enough in that field to go into more detail on how he does it).
Your problem is that the only known symptom (packet corruption in the USB stream) is so hard to detect that you're going to need software, at some level, to detect it.
If you can work out why packets are getting corrupted (bad voltages?) then maybe you could detect that with hardware?
Otherwise you need some kind of robust testing kit, and software to send/receive lots of packets to look for corruption?
No. That's what oscilloscopes and logic analyzers are for. Also there is more specialized equipment such as USB testers.
The simpler the hardware is, and the more access you have to the signals, the more likely you are to be able to diagnose it in a 'purely hardware' kind of way. For example if you had a simple parallel port card plugged into a PCI slot, it would be relatively straightforward to put a bus analyzer on the PCI bus, and the adapter's output, and see if the outputs did the right thing when the card was addressed. But note you'd still need to attempt to access that card from the PCI bus, which would mean either (A) some kind of PCI bus simulation, which would be one heck of a big pile of test hardware, or (B) a cheap off-the-shelf PC with a few lines of test code.
But then at the other end of the spectrum, suppose you're dealing with a large FPGA. You can get one heck of a lot of logic into an FPGA, and you won't necessarily have access to all the test points you'd like. I've personally encountered a bug with a serial port embedded in an FPGA, where a race condition with the shift register preload register would occasionally corrupt a byte. Hypothetically the VHDL could have been reworked to bring out test points, and a pile of scopes and analyzers gathered, but from a management standpoint it was much more cost effective to try to tease the problem out with software. Under normal usage, the bug in question would have turned up once every blue moon. We iterated through speculation about the conditions that would elicit the bug, and refining the test code, until we had test software that could reproduce the bug 2-3 times a minute. At that point we could actually provide clues to the VHDL guys that helped them fix the problem quickly.
Long story short, inside of a week a hardware bug was smoked out via software, whereas starting with the same information and going 'hardware only' would likely have not been any faster, and would have required a lot of expensive test equipment. So, yeah, you probably can do it without software, but as usual it's a trade-off, and you have to find the right balance point between the amount of software vs hardware for the job.