Serial sending weird data - embedded

So I'm making a sketch that takes a two-digit number from the USB port, checks the state of the pin that matches the number, then toggles the pin on/off.
Take a peek at the source
For some reason, when I send 13 through the Arduino serial monitor, I get this message back:
Pin number is greater than 14, details:
490
51
541
Meaning that either the IDE is sending weird numbers, or the Arduino is processing them wrong. Can any of you see why this isn't working right?

If you enter the ASCII characters "1" then "3", Serial.read() will return 49 and then 51. This is because in the ASCII character table "1" and "3" are represented by the numbers 49 and 51, respectively. If you want to find the number that the user typed, you have to convert it from ASCII.
I'm not very familiar with the Arduino language, but assuming it's similar to C, you can find the changes needed here.
I rewrote the program in another way, which may be clearer to read.
The '0' used in the source is simply another way of saying "the number used to represent the character '0'", which is 48. In C-like languages '0' == 48, '1' == 49, and so on.
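As a concrete illustration, here is a minimal C sketch of that conversion, with the two incoming bytes hard-coded in place of the values you would get back from Serial.read() (the helper name is just made up for the example):

#include <stdio.h>

/* Convert two ASCII digit characters (e.g. '1' then '3') into the
   two-digit number they spell out (13). Returns -1 for non-digits. */
int ascii_pair_to_pin(int tens, int ones)
{
    if (tens < '0' || tens > '9' || ones < '0' || ones > '9')
        return -1;                               /* not a decimal digit */
    return (tens - '0') * 10 + (ones - '0');
}

int main(void)
{
    int first  = 49;   /* the byte received for '1' */
    int second = 51;   /* the byte received for '3' */
    printf("pin = %d\n", ascii_pair_to_pin(first, second));   /* prints pin = 13 */
    return 0;
}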

Related

pyUSB returns invalid serial number (Debian Linux)

python 3.5.3, pyusb 1.1, backend_identifier='pyusb', dev.iSerialNumber==3 as expected.
usb_util.get_getstring(dev,256, dev.iSerialNumber)
returns
array('B',[4,3,9,4])
which is a single UTF character, not the (known) serial number of the device (as shown and used elsewhere). (My terminal session actually displays character 249 (0xF9), but I think that's unrelated.)
The serial number actually starts '000E7072 .. (it's a QL7090 printer) and, FWIW, is only about 20 characters long, which should fit into 256 however you cut it.
As far as I know, I am using the default USB locale. Everything else is using the string I can see associated with 0x0409 locale, but my knowledge of pyusb and usb string locales is zero. I have no idea if that is relevant.
My knowledge is wide but not deep. Does anybody know what's gone wrong here?

Barcode Scanner Decoding

I am experiencing some trouble decoding the output of a 1D Chinese barcode reader. The reader uses a USB interface and connects as a keyboard HID device (which I have no problem with). After interfacing the device with LabVIEW and generating the inf driver file, I tried reading device interrupt data from a test barcode in the configuration manual, "000200". The output of the device is sent serially and is as follows: "39 39 39 31 39 39 40".
I am guessing that 40 is the escape character, that 39 is '0' and that 31 is '2'.
After doing some research I could not find the relevant key code table for this encoding. I have tried disabling all other encoding formats using the configuration manual (Code 39, full ASCII, Interleaved 2 of 5, ...).
The module was able to read an upper-case letter and sent an additional character noting that it is upper case.
The device stopped reading the barcode after I disabled Code 128. I re-enabled this option and reading was successful; however, the Code 128 table has "G" assigned to the 39 output and not '0', which messes up the reading.
Has anyone worked with this format? If so, which key code table is it, or should I map the character set manually?
The following is a link to the purchased module:
Reader
Thank you, it is much appreciated!
As per this answer, a USB HID device sends USB usage codes, not ASCII character codes. That answer links to the lengthy official documentation on usb.org, but this document from microsoft.com appears to be a concise summary. If those links break in future, a web search for usb hid key codes or similar should find an equivalent.
Looking at the HID Usage ID column on the Microsoft document, the code for '0' is 27 in hexadecimal, which is 39 in decimal. '2' is 1F which is 31, and 40 decimal is 28 hex which corresponds to Return. That would be consistent with the output you're seeing, assuming you're reporting it as a sequence of decimal values. As you've observed, a capital letter is sent as two codes, the first of which will probably correspond to the 'shift' key in the HID usage table.
You could try searching or asking around for a LabVIEW VI to translate these codes into ASCII characters but it's probably quicker to build your own based on the table linked above. To test it, you could use a barcode generator program or webpage to create barcodes for all the characters you want to be able to decode and check that scanning them with your device gives the correct output.
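If it helps, here is a rough sketch in C of what that translation step looks like; it only covers the digit and Return usages mentioned above (the table would need to be extended from the linked document for the rest of the keyboard, and the function name is just for illustration):

#include <stdio.h>

/* Map a USB HID keyboard usage ID to a character.
   Usages 0x1E..0x26 are '1'..'9', 0x27 is '0', 0x28 is Return. */
char hid_usage_to_char(unsigned char usage)
{
    if (usage >= 0x1E && usage <= 0x26)    /* '1'..'9' */
        return (char)('1' + (usage - 0x1E));
    if (usage == 0x27)                     /* '0' */
        return '0';
    if (usage == 0x28)                     /* Return / Enter */
        return '\n';
    return '?';                            /* not in this minimal table */
}

int main(void)
{
    /* The decimal values reported for the "000200" test barcode. */
    unsigned char report[] = { 39, 39, 39, 31, 39, 39, 40 };
    for (size_t i = 0; i < sizeof report; i++)
        putchar(hid_usage_to_char(report[i]));   /* prints 000200 followed by a newline */
    return 0;
}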

Read input in NASM, and store it whole into a variable

What is the method by which I can read input from the user, say the input is "500",
and then store this number in a variable?
The only method I know of would be to store it character by character, possibly with the need for register offsets.
Is there any other way, preferably storing the number directly?
i.e. something like:
mov var1, inbuffer
Details on environment:
32-bit assembly w/ DJGPP
Thank you.
Ahhh... DJGPP, that'd be DOS I guess. Look into int 21h/0Ah (0Ah in AH). Or you might be better off with the read file subfunction (3Fh ???) on stdin. Look it up in Ralf Brown's Interrupt List.
In any case, what you're going to get is the characters '5', '0', and '0' - 35h, 30h, 30h. It will take some processing to get the number 500 out of this. If you're reading numbers from left to right, zero up a register to use as "result so far". Read a character from your input buffer. If it's a valid decimal digit, subtract '0' to convert character to number, multiply "result so far" by ten, and add in your new number. Repeat until you run out of characters.
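The same loop in C, purely to make the algorithm concrete (translating it into NASM is a handful of instructions around a multiply and an add):

#include <stdio.h>

int main(void)
{
    const char *input = "500";   /* what the read call leaves in your buffer */
    int result = 0;              /* the "result so far" */

    for (const char *p = input; *p >= '0' && *p <= '9'; p++)
        result = result * 10 + (*p - '0');   /* shift one decimal place, add the new digit */

    printf("%d\n", result);      /* prints 500 */
    return 0;
}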

Finding the ASCII value for shift-in/shift-out in SQL?

I have some records which are really messed up.
My team lead told me to find the positions of characters with ASCII values 14 and 15.
I have a query
SELECT CHARINDEX(CHAR(14),X_CUSTOMER_COMMENTS)
FROM vp_service_requests;
SELECT CHARINDEX(CHAR(15),X_CUSTOMER_COMMENTS)
FROM vp_service_requests;
which returns 0, because I wasn't able to find the characters with ASCII values 14 and 15. After a Google search I found that ASCII 14 and 15 are shift-out and shift-in.
How are these represented on the keyboard, so that I can try it with the CHAR(14) function?
As a holdover from the old DOS days, Windows still allows you to enter certain old ASCII codes from the keyboard by pressing and holding the ALT key, then typing the three-digit code you wish to enter (on the 10-key pad, not the numeric row atop the keyboard), eg for 14, type ALT-014.
However, some of the lower-level codes are inherited from old terminal functions, eg ASCII 7 is a bell and 8 is a backspace, and rather than typing a character, they cause the cursor to behave a certain way or induce an application to respond in a defined manner. You can embed a CHAR(XX) value for testing simply by concatenating the value into a string and INSERTing it into your test table.
It should be Ctrl-N and Ctrl-O, although I doubt this will help.
Try loading the records into a good editor and looking at them in hex. Weird characters should stick out like a sore thumb.

Can NMEA values contain '*' (asterisks)?

I am trying to create NMEA-compatible proprietary sentences, which may contain arbitrary strings.
The usual format for an NMEA sentence with checksum is:
$GPxxx,val1,val2,...,valn*ck<cr><lf>
where * marks the start of a 2-digit checksum.
My question is: Can any of the value fields contain a * character themselves?
It would seem possible for a parser to wait for the final <cr><lf>, then to look back at the previous 3 characters to find the checksum if present (rather than just waiting for the first * in the sentence). However I don't know if the standard allows it.
Are there other characters which may cause problems?
The two ASCII characters to be careful with are $, which has to be at the start, and *, which precedes the checksum. Anyone else parsing your custom NMEA wouldn't expect to find either of those characters anywhere else. Some parsers, when they hit a $, assume that a new line has started. With serial port communication characters sometimes get lost in transit, and that's why there's a $ start-of-sentence marker.
If you're going to make your own NMEA commands it is customary to start them with P followed by a 3-character code indicating the manufacturer or company creating the proprietary message, so you could use $PSQU. Note that although it is recommended that NMEA commands are 5 characters long, there are proprietary messages out there from various hardware and software manufacturers that are anywhere from 4 to 7 characters long.
Obviously if you're writing your own parser you can do what you like.
This website is rather useful:
http://www.gpsinformation.org/dale/nmea.htm
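For what it's worth, here's a small sketch in C of how such a proprietary sentence could be put together, assuming the usual NMEA convention that the two checksum digits are the XOR of every character between the $ and the * (the $PSQU header and the payload are just placeholders):

#include <stdio.h>

/* Build an NMEA-style proprietary sentence: $PSQU,<payload>*CK<cr><lf>. */
void build_psqu_sentence(const char *payload, char *out, size_t outsize)
{
    char body[128];
    snprintf(body, sizeof body, "PSQU,%s", payload);

    unsigned char ck = 0;                    /* XOR of everything between $ and * */
    for (const char *p = body; *p != '\0'; p++)
        ck ^= (unsigned char)*p;

    snprintf(out, outsize, "$%s*%02X\r\n", body, ck);
}

int main(void)
{
    char sentence[160];
    build_psqu_sentence("val1,val2", sentence, sizeof sentence);
    fputs(sentence, stdout);                 /* prints the sentence with its checksum */
    return 0;
}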
If you're extending the protocol yourself (based on "proprietary") - then sure, you can put in anything you like. I would stick to ASCII, but go wild within those bounds. (Obviously, you need to come up with your own $GPxxx so as not to clash with existing messages. Perhaps a new header $SQUEL, ...)
By definition, a proprietary message will not be NMEA-compatible.
A standard parser listening to an NMEA stream should ignore anything that doesn't match what it thinks is 'good' data. That means a checksum error, or any massively corrupted message like it would think your new message is with some random *s thrown in.
If you are merely writing an existing message, then a * doesn't make sense and should be ignored, but you run the risk of major issues if the checksum is correct and the parser doesn't understand the payload.