I am struggling to get multiple reads of the RHR register working on my SC16IS750 breakout board. I am using the board at the standard I2C frequency, and
normal single reads and writes are working.
The chip has a FIFO which can hold up to 64 characters. Figure 24 in the chip's datasheet (http://www.nxp.com/documents/data_sheet/SC16IS740_750_760.pdf) shows how it should work.
However, when I initiate such a multiple read, only the first character is transferred correctly; all the other characters are "0". If, for example, the FIFO held 16 characters before the transfer, it is empty afterwards.
Here is what I do when I receive a FIFO interrupt from the chip:
- Read the interrupt status register IIR
- Read the line status register LSR
- Read the RX level register RXLVL, i.e. the number of characters in the FIFO
- Read multiple data bytes:
I2C start, slave address+write, register address, repeated start, slave address+read, read character, ACK ... read last character, NACK
I send a 16-character test pattern to the chip; only the first character is correct, the rest are "0". RXLVL shows 16 before the read.
Does anybody know what needs to be set up or considered for this operation with the NXP chip?
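For illustration, the burst read described above might look roughly like the following on an Arduino-class I2C master. This is only a sketch: the Wire library, the example 7-bit slave address, and the way the register number is packed into the sub-address byte are assumptions to be checked against the datasheet.

#include <Wire.h>

// Placeholder values -- check the SC16IS750 datasheet for the actual 7-bit
// slave address (set by the A0/A1 pins) and the sub-address byte layout
// (internal register number in bits 6:3, channel select in bits 2:1).
const uint8_t SLAVE_ADDR  = 0x4D;       // example address only
const uint8_t RHR_SUBADDR = 0x00 << 3;  // RHR is internal register 0x00

// Burst-read `count` bytes from the RX FIFO in a single I2C transaction:
// START, addr+W, sub-address, repeated START, addr+R, data..., NACK, STOP.
void readFifo(uint8_t *buf, uint8_t count) {
  Wire.beginTransmission(SLAVE_ADDR);
  Wire.write(RHR_SUBADDR);
  Wire.endTransmission(false);           // repeated start, no STOP yet
  Wire.requestFrom(SLAVE_ADDR, count);   // master ACKs all bytes except the last
  for (uint8_t i = 0; i < count && Wire.available(); i++) {
    buf[i] = Wire.read();
  }
}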
Related
I am experiencing some trouble decoding the output of a 1D Chinese barcode reader. The reader uses a USB interface and connects as a keyboard HID device (which I have no problem with). After interfacing the device with LabVIEW and generating the .inf driver file, I tried reading the device interrupt data from a test barcode in the configuration manual, "000200". The output of the device is sent serially and is as follows: "39 39 39 31 39 39 40".
I am guessing that 40 is the escape character, 39 is 0, and 31 is 2.
After doing some research I could not find the relevant key code table for this encoding. I have tried disabling all other encoding formats using the configuration manual (Code 39, full ASCII, Interleaved 2 of 5, ...).
The module was able to read uppercase letters and sends an additional character noting that it is uppercase.
The device stopped reading the barcode after I disabled Code 128. I re-enabled this option and reading was successful; however, the Code 128 table has "G" assigned to the 39 output rather than 0, which messes up the reading.
Has anyone worked with this format? If so, which key code table is it? Or should I map the character set manually?
The following is a link to the purchased Module:
Reader
Thank you, it is much appreciated!
As per this answer, a USB HID device sends USB usage codes, not ASCII character codes. That answer links to the lengthy official documentation on usb.org, but this document from microsoft.com appears to be a concise summary. If those links break in future, a web search for usb hid key codes or similar should find an equivalent.
Looking at the HID Usage ID column on the Microsoft document, the code for '0' is 27 in hexadecimal, which is 39 in decimal. '2' is 1F which is 31, and 40 decimal is 28 hex which corresponds to Return. That would be consistent with the output you're seeing, assuming you're reporting it as a sequence of decimal values. As you've observed, a capital letter is sent as two codes, the first of which will probably correspond to the 'shift' key in the HID usage table.
You could try searching or asking around for a LabVIEW VI to translate these codes into ASCII characters but it's probably quicker to build your own based on the table linked above. To test it, you could use a barcode generator program or webpage to create barcodes for all the characters you want to be able to decode and check that scanning them with your device gives the correct output.
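If you do build the translation yourself, the core of it is just a lookup from HID usage codes to characters. Below is a minimal sketch in C++ based on the standard HID keyboard usage table (0x04-0x1D map to 'a'-'z', 0x1E-0x27 map to '1'-'9' then '0', 0x28 is Return); shift handling and the rest of the table are omitted.

#include <cstdint>

// Translate a single HID keyboard usage code to an ASCII character.
// Only digits, lowercase letters and Return are handled in this sketch.
char hidUsageToAscii(uint8_t usage) {
  if (usage >= 0x04 && usage <= 0x1D) return 'a' + (usage - 0x04); // a..z
  if (usage >= 0x1E && usage <= 0x26) return '1' + (usage - 0x1E); // 1..9
  if (usage == 0x27) return '0';                                   // 0 (39 decimal)
  if (usage == 0x28) return '\n';                                  // Return (40 decimal)
  return '\0';                                                     // unmapped code
}

// Example: the reported decimal sequence 39 39 39 31 39 39 40
// (0x27 0x27 0x27 0x1F 0x27 0x27 0x28) decodes to "000200" followed by Return.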
I am new to embedded programming. I am using a compiler to convert source code into hex, which I will burn into a microcontroller. My question is: a microcontroller (like all ICs) supports only binary numbers (0 & 1), so how does it work with a hex file?
The software that loads the program/data into the flash reads whatever format it supports, which may be Intel Hex, Motorola S-record, ELF, COFF, a raw binary, or something else, and then does the right thing to program the flash with just the relevant ones and zeros.
First of all, the PC you are using right now has a processor inside, which works just like any other microcontroller. You are using it to browse the internet, although it's all "1s and 0s on the inside". And I am presuming your actual firmware doesn't come even close to running what your PC is running at this moment.
a microcontroller (like all ICs) supports only binary numbers (0 & 1)
Your idea that a "microcontroller only supports binary numbers (0 & 1)" is a misconception. At its lowest level, yes, a microcontroller contains a bunch of transistors, and each of them can store only two states of information (a bit).
But the reason for this is simply because this is a practical way to physically store one small chunk of data.
If you check the assembly instruction manual for your uC architecture, you will see a large number of instructions operating on different data widths (bits grouped into 8, 16 or larger chunks). If your controller is, say, 16-bit, then this will be the basic word size for most instructions, and the one that will be the most efficient. When programming in C, this will also be the size of the "special" int type, which all smaller integral types get expanded to.
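As a small illustration of that last point, here is what integer promotion looks like in C on a hypothetical 16-bit target (the target width is an assumption for the example):

#include <stdint.h>

uint8_t a = 200, b = 100;
// In the expression below, a and b are first promoted to int (16 bits on
// this hypothetical target), so the sum is 300 rather than wrapping at 255.
int sum = a + b;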
In other words, bits are just the building blocks of your hardware, and most of the time shouldn't even concern you at the firmware level, let alone at higher application levels. Compare it to a human life form: the human body is made of cells, but it is also capable of doing more than a single-cell organism, isn't it?
I am using a compiler to convert source code into hex
Actually, you are using the compiler to create machine code for your particular microcontroller architecture. "Hex", or more precisely the Intel Hex file format, is just one of several file formats used for storing the machine code in a file, and it is conveniently a plain-text ASCII file which you can easily open in Notepad.
To clarify, let's say you wrote a simple line of C code like this:
a = b + c;
Your compiler needs to know which architecture you are targeting, in order to convert this to machine code. For a fictional uC architecture, this will first get compiled to the following fictional assembly language:
// compiler decides that a, b, c will be stored at addresses 0x1000, 0x1004, 0x1008
mov ax, (0x1004) // move value from address 0x1004 to accumulator
add ax, (0x1008) // add value from address 0x1008 to accumulator
mov (0x1000), ax // move value from accumulator to address 0x1000
Each of these instructions has its own instruction opcode, which can be found in the assembly instruction manual. If the instruction operates on one or more parameters, the uC will know that the bytes following the instruction are data bytes:
// mov ax, (addr) --> opcode 0x10
// add ax, (addr) --> opcode 0x20
// mov (addr), ax --> opcode 0x30
mov ax, (0x1004) // 0x10 (0x10 0x04)
add ax, (0x1008) // 0x20 (0x10 0x08)
mov (0x1000), ax // 0x30 (0x10 0x00)
Now you've got your machine-code, which, written as hex values, becomes:
10 10 04 20 10 08 30 10 00
And converted to binary becomes:
00010000000100000000010000100000...
To transfer this to your controller, you will use a file format which your flash uploader knows how to read, which is what Intel Hex is most commonly used for.
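For illustration, if those nine bytes were placed at load address 0x0000, the single Intel Hex record holding them would look something like this (the address is an assumption for the example):

:090000001010042010083010005B

Here 09 is the byte count, 0000 is the load address, 00 is the record type (data), then the nine data bytes follow, and 5B is the checksum (the two's complement of the sum of all the preceding bytes).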
Once transferred to your microcontroller, it will be stored as a bunch of bits in its flash memory, but the controller is designed to read these bits in chunks of 8 or more bits and evaluate them as instruction opcodes or data, depending on the context. For the example above, it will read the first 8 bits and, seeing that it is instruction opcode 0x10 (which takes an additional address parameter), it will read the next two bytes to form the address 0x1004. It will then execute the instruction and advance the instruction pointer.
Hex, Decimal, Binary, they are all just ways of representing a number.
AA in hex is the same as 170 in decimal and 10101010 in binary (and 252 in octal).
The reason the hex representation is used is that it is very convenient when working with microcontrollers, as one hex digit corresponds to exactly one nibble (4 bits). Hence F is 1111, FF is 1111 1111, and so forth.
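To see the equivalence in code, the same value can be written in any of these bases in C/C++ (binary literals are standard in C++14 and a common compiler extension in C):

unsigned char x1 = 0xAA;        // hexadecimal
unsigned char x2 = 170;         // decimal
unsigned char x3 = 0252;        // octal
unsigned char x4 = 0b10101010;  // binary -- all four hold the same bit pattern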
Why do messages going through websockets always start with \x00 and end with \xff, as in \x00Your message\xff?
This documentation might help.
Excerpt from section 1.2:
Data is sent in the form of UTF-8 text. Each frame of data starts
with a 0x00 byte and ends with a 0xFF byte, with the UTF-8 text in
between.
The WebSocket protocol uses this framing so that specifications that
use the WebSocket protocol can expose such connections using an
event-based mechanism instead of requiring users of those
specifications to implement buffering and piecing together of
messages manually.
To close the connection cleanly, a frame consisting of just a 0xFF
byte followed by a 0x00 byte is sent from one peer to ask that the
other peer close the connection.
The protocol is designed to support other frame types in future.
Instead of the 0x00 and 0xFF bytes, other bytes might in future be
defined. Frames denoted by bytes that do not have the high bit set
(0x00 to 0x7F) are treated as a stream of bytes terminated by 0xFF.
Frames denoted by bytes that have the high bit set (0x80 to 0xFF)
have a leading length indicator, which is encoded as a series of
7-bit bytes stored in octets with the 8th bit being set for all but
the last byte. The remainder of the frame is then as much data as
was specified. (The closing handshake contains no data and therefore
has a length byte of 0x00.)
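As a concrete illustration of the framing that excerpt describes, a minimal sketch in C++ of wrapping a text message in the old draft's 0x00/0xFF frame might look like this (the function name is made up; modern RFC 6455 framing is different, as noted below):

#include <cstdint>
#include <string>
#include <vector>

// Wrap a UTF-8 text payload in the old draft framing:
// one 0x00 start byte, the payload, then one 0xFF end byte.
std::vector<uint8_t> frameTextMessage(const std::string &utf8Payload) {
    std::vector<uint8_t> frame;
    frame.push_back(0x00);                                        // start of frame
    frame.insert(frame.end(), utf8Payload.begin(), utf8Payload.end());
    frame.push_back(0xFF);                                        // end of frame
    return frame;
}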
The working spec has changed and no longer uses 0x00 and 0xFF as start and end bytes:
http://tools.ietf.org/id/draft-ietf-hybi-thewebsocketprotocol-04.html
I am not 100% sure about this, but my guess would be that they signify the start and end of the message, since \x00 is a single-byte representation of 0 and \xFF is a single-byte representation of 255.
I am trying to create NMEA-compatible proprietary sentences, which may contain arbitrary strings.
The usual format for an NMEA sentence with checksum is:
$GPxxx,val1,val2,...,valn*ck<cr><lf>
where * marks the start of a 2-digit checksum.
My question is: Can any of the value fields contain a * character themselves?
It would seem possible for a parser to wait for the final <cr><lf>, then to look back at the previous 3 characters to find the checksum if present (rather than just waiting for the first * in the sentence). However I don't know if the standard allows it.
Are there other characters which may cause problems?
The two ASCII characters to be careful with are $, which has to be at the start, and *, which precedes the checksum. Anyone else parsing your custom NMEA wouldn't expect to find either of those characters anywhere else. Some parsers, when they hit a $, assume that a new sentence has started. With serial port communication, characters sometimes get lost in transit, and that's why there is a $ start-of-sentence marker.
If you're going to make your own NMEA commands, it is customary to start them with P followed by a 3-character code indicating the manufacturer or company creating the proprietary message, so you could use $PSQU. Note that although it is recommended that NMEA commands are 5 characters long, there are proprietary messages out there from various hardware and software manufacturers that are anywhere from 4 to 7 characters long.
Obviously if you're writing your own parser you can do what you like.
This website is rather useful:
http://www.gpsinformation.org/dale/nmea.htm
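For completeness, the two-digit checksum itself is just the XOR of every character between the leading $ and the *, written as two uppercase hex digits. A minimal sketch in C++ (the function name is made up):

#include <cstdio>
#include <string>

// `body` is everything between the leading '$' and the '*', e.g. "PSQU,hello".
std::string nmeaChecksum(const std::string &body) {
    unsigned char cksum = 0;
    for (char c : body) cksum ^= static_cast<unsigned char>(c);
    char buf[3];
    std::snprintf(buf, sizeof(buf), "%02X", cksum);
    return buf;
}

The two characters it returns are what you append after the * in "$PSQU,...*XX".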
If you're extending the protocol yourself (based on "proprietary") - then sure, you can put in anything you like. I would stick to ASCII, but go wild within those bounds. (Obviously, you need to come up with your own $GPxxx so as not to clash with existing messages. Perhaps a new header $SQUEL, ...)
By definition, a proprietary message will not be NMEA-compatible.
A standard parser listening to an NMEA stream should ignore anything that doesn't match what it thinks is 'good' data. That means a checksum error, or any massively corrupted message like it would think your new message is with some random *s thrown in.
If you are merely writing an existing message, then a * doesn't make sense, and should be ignored, but you run the risk of major issues if the checksum is correct, and the parser doesn't understand the payload.
So I'm making a sketch that takes a two-digit number from the USB port, checks the state of the pin that matches the number, then toggles the pin on/off.
Take a peek at the source
For some reason, when I send 13 through the Arduino serial monitor, I get this message back:
Pin number is greater than 14, details:
490
51
541
Meaning that the IDE is sending weird numbers, or the Arduino is processing them wrong. Do any of you see a problem as to why this isn't working right?
If you enter the ASCII characters "1" then "3", then Serial.read() will return 49 and 51. This is because in the ASCII character table "1" and "3" are represented by the numbers 49 and 51, respectively. If you want to find the number that the user typed, you have to convert it from ASCII.
I'm not very familiar with the Arduino language, but assuming it's similar to C you can find the changes needed Here.
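For example, the conversion for a two-digit pin number might look like the snippet below (a sketch, assuming exactly two digit characters such as "13" arrive on the serial port):

void loop() {
  if (Serial.available() >= 2) {
    char tens = Serial.read();                    // e.g. '1', which is 49
    char ones = Serial.read();                    // e.g. '3', which is 51
    int pin = (tens - '0') * 10 + (ones - '0');   // (49-48)*10 + (51-48) = 13
    // ... range-check `pin` and toggle it here ...
  }
}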
I rewrote the program in another way, which may be clearer to read.
The '0' used in the source is simply another way of saying "the number used to represent the character '0'", so it is 48. In C-like languages '0' == 48, '1' == 49, etc.