I know that in Turing machines, the (different) tapes are used for input, output, and as a stack. In the problem of adding 2 numbers with a Turing machine, the input involves several symbols like 1, 0, B (blank), and +.
(Though this question is related to physics, I asked it here since I thought they might not know about Turing machines and their inputs.)
My question is:
If the input is BBBBB1111+111111BB, then on a magnetic tape:
1 -> represented by north polarity (say)
0 -> represented by south polarity (say)
B -> represented by no polarity
Then how will '+' be represented?
I don't think there are codes (like ASCII) for special symbols, since the number and type of special symbols are implementation dependent; special codes would also make the algorithm more tedious.
Or
Is the symbol representation on tapes entirely different from the method described above? If yes, please explain.
You would probably do this by having each character encoded with multiple bits. For example:
B: 00
0: 01
1: 10
+: 11
Your read head would then have size two and would always move two steps to the left or the right when making a move.
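For illustration, a minimal sketch of that two-bits-per-symbol scheme (Python is used only as notation here; the function names are mine):

# Two-bit encoding for the four tape symbols, per the table above.
ENCODE = {'B': '00', '0': '01', '1': '10', '+': '11'}
DECODE = {bits: sym for sym, bits in ENCODE.items()}

def encode_tape(tape):
    """Encode each tape symbol as a fixed-width two-bit group."""
    return ''.join(ENCODE[sym] for sym in tape)

def decode_tape(bits):
    """Read the tape two bits at a time, mirroring a size-two head."""
    return ''.join(DECODE[bits[i:i + 2]] for i in range(0, len(bits), 2))

print(encode_tape('BB1+1'))       # 0000101110
print(decode_tape('0000101110'))  # BB1+1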
Symbol: Representation
0: 1
1: 11
2: 111
n: n+1 ones
Blank: B
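A tiny sketch of this unary scheme (the helper names are hypothetical):

def to_unary(n):
    """Represent the number n as n+1 ones, per the table above."""
    return '1' * (n + 1)

def from_unary(s):
    """Invert the encoding: a run of k ones denotes k-1."""
    return len(s) - 1

print(to_unary(2))        # 111
print(from_unary('111'))  # 2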
I was working on a project and I can't use bitwise negation with a U32 (unsigned 32-bit) value, because when I tried the negation operator on, for example, 1, the result was the biggest possible U32 number, while I expected zero. My idea is to work with a binary number like 110010 and negate only its significant bits, everything from the first 1-bit onwards: 110010 becomes 001101. Is there a way to do that in LabVIEW?
This computes the value you are looking for.
1110 --> 0001 (aka, 1)
1010 --> 0101 (aka, 101)
111 --> 000 (aka, 0) [indeed, all patterns that are all "1" will become 0]
0 --> 0 [because there are no bits to negate... maybe you want to special-case this as "1"?]
Note: This is a VI Snippet. Save the .png file to your disk then drag the image from your OS into LabVIEW and it will generate the block diagram (I wrote it in LV 2016, so it works for 2016 or later). Sometimes dragging directly from browser to diagram works, but most browsers seem to strip out the EXIF data that makes that work.
Here's an alternative solution without a loop. It formats the input into its string representation (without leading zeros) to figure out how many bits to negate - call this n - and then XORs the input with 2^n - 1.
Note that this version will return an output of 1 for an input of 0.
Using the string functions feels a bit hacky... but it doesn't use a loop!!
Obviously we could instead try to get the 'bit length' of the input using its base-2 log, but I haven't sat down and worked out how to guarantee there are no rounding issues when the input has only its most significant bit set: in that case the base-2 log should be exactly an integer, but might come out a fraction smaller.
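As a sketch of the same approach in Python (the function name is mine, and bit counting via the binary-string form plays the role of the string trick described above):

def negate_significant_bits(x):
    """XOR x with 2**n - 1, where n is the number of significant bits,
    found from the binary string representation (no leading zeros)."""
    n = len(format(x, 'b'))        # '110010' -> 6
    return x ^ ((1 << n) - 1)      # 0b110010 ^ 0b111111 = 0b001101

print(bin(negate_significant_bits(0b110010)))  # 0b1101, i.e. 001101
print(negate_significant_bits(0b1110))         # 1
print(negate_significant_bits(0))              # 1, matching the note above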
Here's a solution without strings/loops using the conversion to float (a common method of computing floor(log_2(x))). This won't work on unsigned types.
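A Python sketch of that float-based variant, using math.frexp (its exponent for a positive integer is exactly the bit length, so there are no log-rounding worries while the value fits a double exactly):

import math

def negate_significant_bits_float(x):
    """Find n = floor(log2(x)) + 1 via the float exponent, then XOR
    with 2**n - 1. x must be a positive integer."""
    _, n = math.frexp(x)           # x = m * 2**n with 0.5 <= m < 1
    return x ^ ((1 << n) - 1)

print(bin(negate_significant_bits_float(0b110010)))  # 0b1101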
I am experiencing some trouble decoding the output of a 1D Chinese barcode reader. The reader uses a USB interface and connects as a keyboard HID device (which I have no problem with). After interfacing the device with LabVIEW and generating the .inf driver file, I tried reading the device interrupt data from a test barcode in the configuration manual, "000200". The output of the device is sent serially and is as follows: "39 39 39 31 39 39 40".
I am guessing that 40 is the escape character, the 39 is 0, and the 31 is 2.
After doing some research I could not find the relevant key code table for this encoding. I have tried disabling all other encoding formats using the configuration manual (Code 39, full ASCII, Interleaved 2 of 5, ...).
The module was able to read an uppercase letter and sent an additional character noting that it is uppercase.
The device stopped reading the barcode after I disabled Code 128. I re-enabled this option and reading was successful; however, the Code 128 table has "G" assigned to the 39 output and not "0", which messes up the reading.
Did anyone work with this format? If so, which key code table is it? Or should I map the character set manually?
The following is a link to the purchased Module:
Reader
Thank you, it is much appreciated!
As per this answer, a USB HID device sends USB usage codes, not ASCII character codes. That answer links to the lengthy official documentation on usb.org, but this document from microsoft.com appears to be a concise summary. If those links break in future, a web search for usb hid key codes or similar should find an equivalent.
Looking at the HID Usage ID column on the Microsoft document, the code for '0' is 27 in hexadecimal, which is 39 in decimal. '2' is 1F which is 31, and 40 decimal is 28 hex which corresponds to Return. That would be consistent with the output you're seeing, assuming you're reporting it as a sequence of decimal values. As you've observed, a capital letter is sent as two codes, the first of which will probably correspond to the 'shift' key in the HID usage table.
You could try searching or asking around for a LabVIEW VI to translate these codes into ASCII characters but it's probably quicker to build your own based on the table linked above. To test it, you could use a barcode generator program or webpage to create barcodes for all the characters you want to be able to decode and check that scanning them with your device gives the correct output.
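As a starting point, here's a minimal sketch of such a translation (digits and lowercase letters only; treating the extra pre-uppercase code as a left-shift usage, 0xE1, is an assumption about how your device reports it):

# Partial HID usage-ID table: 0x04..0x1D are 'a'..'z',
# 0x1E..0x26 are '1'..'9', 0x27 is '0', 0x28 is Return.
HID_TO_CHAR = {0x04 + i: chr(ord('a') + i) for i in range(26)}
HID_TO_CHAR.update({0x1E + i: chr(ord('1') + i) for i in range(9)})
HID_TO_CHAR[0x27] = '0'

LEFT_SHIFT = 0xE1   # assumption: the extra code before uppercase letters
RETURN = 0x28

def decode_scan(codes):
    """Translate a sequence of decimal HID usage codes into text."""
    out, shift = [], False
    for code in codes:
        if code == LEFT_SHIFT:
            shift = True
        elif code == RETURN:
            break                  # end of the scanned barcode
        elif code in HID_TO_CHAR:
            ch = HID_TO_CHAR[code]
            out.append(ch.upper() if shift else ch)
            shift = False
        # unknown codes are silently ignored
    return ''.join(out)

print(decode_scan([39, 39, 39, 31, 39, 39, 40]))  # 000200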
The Hyphen library seems to be a very popular and free way to have hyphenation in your app.
What does hyphenation vector mean?
I am running the example attached to the library source code.
Example output:
hibernate // input word
030412000 // output hyphenation vector
hi=ber=nate // hyphen points
- hi=bernate
- hiber=nate
Odd numbers in the vector indicate hyphenation points. But what do all of those values mean?
László Németh describes the algorithm in OpenOffice's documentation in full detail.
The library uses the algorithm developed by Frank M. Liang ("Word Hy-phen-a-tion by Com-pu-ter"): letters in digrams, trigrams, and longer patterns are assigned numerical values indicating whether a position is a 'usual' place (an odd number) or an 'unusual' place (an even number) for a hyphen to occur. The higher the number, the greater its importance -- a pattern will almost never be broken on a larger even number, and almost always on a larger odd number. The number sequences are determined statistically from a corpus of pre-hyphenated words.
Note that the numbers are for positions between two characters. A better notation would have been
h i b e r n a t e
0 3 0 4 1 2 0 0 (0)
(where the last 0 is redundant).
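A small sketch of how to apply the vector (an odd digit between two letters marks an allowed hyphen point):

def hyphen_points(word, vector):
    """Insert '=' wherever the hyphenation vector holds an odd digit.

    vector[i] scores the position between word[i] and word[i+1]; the
    final digit scores the position after the last letter and is
    redundant for hyphenation."""
    out = []
    for i, ch in enumerate(word):
        out.append(ch)
        if i < len(word) - 1 and int(vector[i]) % 2 == 1:
            out.append('=')
    return ''.join(out)

print(hyphen_points('hibernate', '030412000'))  # hi=ber=nate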
Does Fortran have a maximum 'string' length?
I am going to be reading lines from a file which could have very long lines. The one I am looking at now has around 1.3k characters per line, but it is possible that future files may have much more. I am reading each line into a character*5000 variable, but if I get longer lines in the future, is it bad to make it a character*5000000 variable? Is there a max? Is there a better way to solve this problem than making very large character variables?
Since the usual Fortran IO is record based, reading lines into strings implies knowing the maximum string length. Another possible design: use stream IO and Fortran will ignore the record boundaries. Read the file in fixed-length chunks that are shorter than the longest lines. The complication is handling items split across chunk boundaries. The practicality depends on details not given in the question.
P.S. From "The Fortran 2003 Handbook" by Adams et al.: "The maximum length permitted for character strings is processor-dependent." -- meaning compiler dependent.
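The chunking idea itself, sketched in Python for brevity; the complication mentioned above appears as the partial line carried across chunk boundaries:

def read_lines_chunked(path, chunk_size=4096):
    """Read fixed-size chunks and reassemble lines of arbitrary length.

    'leftover' carries the partial line split across a chunk boundary,
    which is the complication mentioned above."""
    leftover = ''
    with open(path) as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:               # end of file
                if leftover:
                    yield leftover      # final line without a newline
                return
            parts = (leftover + chunk).split('\n')
            leftover = parts.pop()      # last piece may be incomplete
            yield from parts

# 'data.txt' is a placeholder path
for line in read_lines_chunked('data.txt'):
    print(len(line))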
The maximum will be implementation dependent. For your case, I can think of something along these lines:
character(:), allocatable :: ch
integer :: l, io
l = 5
do
  allocate(character(l) :: ch)
  read(unit,'(a)',iostat=io) ch           ! shorter records are blank-padded
  if (ch(l-4:l) == ' ' .or. io /= 0) exit ! last 5 chars blank: the record fit
  deallocate(ch)
  backspace(unit)                         ! step back to re-read the same record
  l = l * 2
end do
Obviously this will not work with pad='no', or if you expect runs of five or more spaces within your records.
I am trying to create NMEA-compatible proprietary sentences, which may contain arbitrary strings.
The usual format for an NMEA sentence with checksum is:
$GPxxx,val1,val2,...,valn*ck<cr><lf>
where * marks the start of a 2-digit checksum.
My question is: Can any of the value fields contain a * character themselves?
It would seem possible for a parser to wait for the final <cr><lf>, then to look back at the previous 3 characters to find the checksum if present (rather than just waiting for the first * in the sentence). However I don't know if the standard allows it.
Are there other characters which may cause problems?
The two ASCII characters to be careful with are $, which has to be at the start, and * which precedes the checksum. Anyone else parsing your custom NMEA wouldn't expect to find either of those characters anywhere else. Some parsers, when they hit a $ assume that a new line has started. With serial port communication sometimes characters get lost in transit, and that's why there's a $ start of sentence marker.
If you're going to make your own NMEA commands it is customary to start them with P followed by a 3 character code indicating the manufacturer or company creating the proprietary message, so you could use $PSQU. Note that although it is recommended that NMEA commands are 5 characters long, there are proprietary messages out there by various hardware and software manufacturers that are anywhere from 4 characters to 7 characters long.
Obviously if you're writing your own parser you can do what you like.
This website is rather useful:
http://www.gpsinformation.org/dale/nmea.htm
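For reference, a minimal sketch of building such a proprietary sentence with its checksum (the NMEA checksum is the XOR of all characters between $ and *; $PSQU and the field values are placeholders):

def nmea_sentence(body):
    """Build '$<body>*<ck>' where ck is the XOR of all characters
    between '$' and '*', as two uppercase hex digits."""
    ck = 0
    for ch in body:
        ck ^= ord(ch)
    return '$%s*%02X\r\n' % (body, ck)

print(nmea_sentence('PSQU,val1,val2'))  # $PSQU,val1,val2*04 plus CRLF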
If you're extending the protocol yourself (based on "proprietary") - then sure, you can put in anything you like. I would stick to ASCII, but go wild within those bounds. (Obviously, you need to come up with your own $GPxxx so as not to clash with existing messages. Perhaps a new header $SQUEL, ...)
By definition, a proprietary message will not be NMEA-compatible.
A standard parser listening to an NMEA stream should ignore anything that doesn't match what it thinks is 'good' data. That means a checksum error, or any massively corrupted message, which is how it would see your new message with some random *s thrown in.
If you are merely emitting an existing message type, then a * doesn't make sense and should be ignored, but you run the risk of major issues if the checksum is correct and the parser doesn't understand the payload.
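And a sketch of the back-from-the-end parsing strategy proposed in the question, which tolerates a literal * inside the fields (something a standard parser, as noted, would not expect):

def parse_sentence(line):
    """Parse '$...*hh' by locating the checksum from the end of the
    sentence, so a '*' inside a field cannot confuse the split."""
    line = line.rstrip('\r\n')
    if not line.startswith('$') or len(line) < 4 or line[-3] != '*':
        return None                    # malformed: silently ignore
    body, ck_hex = line[1:-3], line[-2:]
    try:
        expected = int(ck_hex, 16)
    except ValueError:
        return None                    # checksum digits aren't hex
    ck = 0
    for ch in body:
        ck ^= ord(ch)
    if ck != expected:
        return None                    # checksum mismatch: ignore
    return body.split(',')

print(parse_sentence('$PSQU,a*b,c*4D\r\n'))  # ['PSQU', 'a*b', 'c']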