A little background: I am writing a Visual Basic application. It will connect to an Omron Programmable Logic Controller (PLC).
When I read data from the PLC, it comes as WORDs (16 bits). However, the PLC programmer needs a double-word (32 bits) for a big number (bigger than 16 bits can hold), and I need to show that number on the screen. As mentioned, I read from the PLC as WORDs, so I can make two reads to get the two words. However, they come back as separate values.
Here's an example: I need to show 120,000 on the screen (VB app). On the PLC, I read 2 words (in bit form):
Word#1: 1101 0100 1100 0000 (Unsigned, this equals 54464)
Word#2: 0000 0000 0000 0001
I need to put these together like this: 0000 0000 0000 0001 1101 0100 1100 0000 = 120,000
Are there any built-in functions in Visual Basic that will combine these two 16-bit words into one double-word? If so, what is it?
Or do I have to write a function to put these two values together?
Or has someone done something like this and can provide some info for me?
I found the << and >> operators. They shift bits left and right. So I used (Word2 << 16) to shift Word2 into the high word, then added the value of Word1.
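For example, a minimal VB.NET sketch of that approach (the variable names are just for illustration):

Dim word1 As UShort = 54464US   ' low word:  1101 0100 1100 0000
Dim word2 As UShort = 1US       ' high word: 0000 0000 0000 0001

' Shift the high word up 16 bits and Or in the low word
Dim combined As UInteger = (CUInt(word2) << 16) Or CUInt(word1)

Console.WriteLine(combined)     ' prints 120000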
In VBA, if you want to concatenate two strings, all you need to do is use the & operator. For example:
Dim TempCombinedWord As String
TempCombinedWord = FirstWord & LastWord
If FirstWord contained Happy and LastWord contained TreeFriends, then TempCombinedWord would contain HappyTreeFriends.
Related
I need to read a Cobol file into VB.net. Here is the description of the data types from the documentation:
All magnetic tape files are recorded in 9-track, 800 BPI mode with odd parity. They are created on IBM equipment under the disk operating system, IBM System 360 standard.
Binary - Data is coded in pure binary code.
BCD - Data is coded in binary coded decimal format. (Primarily for files created by the IBM 1401 System).
EBCDIC - Data is coded in extended binary coded decimal interchange code. (An IBM-developed code.)
Packed - Data is coded in packed decimal format.
File Format:
1-2 Record Count [Numeric] (Binary)
3-4 Filler (Binary)
5-5 Record Type [B or R] (EBCDIC)
6-10 Sales Location Numeric [9 digit number] (Packed)
11-13 Sales Identifier (3 character Alpha) (EBCDIC)
etc.
So, I know I should read the entire file into a byte array and that's about the limit of what I know to do...
A) I saw another post on EBCDIC conversion using
System.Text.Encoding.GetEncoding(37)
but it is for an entire file. If I run the whole file through it, I see intelligible text, but of course the other fields are junk. I don't know how to decode a single field properly.
B) I have no idea what to do with PURE Binary format.
C) I don't know how to read Packed, particularly as a single field.
I've tried a variety of decoding options for PURE BINARY, but the number I get for the first field is not consistent with the stated length of the rows in the docs.
Packed decimal format:
For s9(5)V9(4) comp-3, 123.45 is represented in byte format as
00 12 34 50 0c
Each digit is represented by 4 bits; there is a 4-bit sign (c) at the end and an assumed decimal point after the 3.
Most languages provide a routine for converting bytes into a string, e.g. byte x'34' --> String '34'. So you can (see the sketch after these steps):
Convert the bytes to a String representation
Add the decimal point in
Strip off the sign character from the end and add the appropriate sign to the front
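A minimal VB.NET sketch of those three steps (the function name, offsets and scale parameter are assumptions for illustration; a library such as JRecord does this for you in Java):

Function DecodePacked(bytes As Byte(), offset As Integer, length As Integer, scale As Integer) As Decimal
    ' 1. Convert each nibble to its decimal digit, skipping the final sign nibble
    Dim digits As New System.Text.StringBuilder()
    For i As Integer = offset To offset + length - 1
        digits.Append(CInt(bytes(i) >> 4))
        If i < offset + length - 1 Then digits.Append(CInt(bytes(i) And &HF))
    Next
    ' 2. Apply the assumed decimal point via the scale (digits after the point)
    Dim value As Decimal = Decimal.Parse(digits.ToString()) / CDec(10 ^ scale)
    ' 3. Apply the sign: x'D' means negative, x'C' and x'F' mean positive
    If (bytes(offset + length - 1) And &HF) = &HD Then value = -value
    Return value
End Function

' DecodePacked(New Byte() {&H0, &H12, &H34, &H50, &HC}, 0, 5, 4) returns 123.45D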
There are other ways:
Create a translation array and do an array lookup. (See https://github.com/bmTas/JRecord/blob/master/Source/JRecord_Project/JRecord_Common/src/main/java/net/sf/JRecord/Types/smallBin/TypePackedDecimal9.java for an example)
Process it 4 bits at a time
Other fields
The first field (binary) might be a big-endian binary integer or another packed decimal. There is probably a utility built into .NET to do this (see the sketch after this list).
Convert the character fields from EBCDIC to ASCII one field at a time.
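A hedged VB.NET sketch of those two field types, assuming one record has already been read into a byte array (the sample bytes below just mirror the layout above; on .NET Core you may first need Encoding.RegisterProvider(CodePagesEncodingProvider.Instance) before code page 37 is available):

Imports System.Text

Module FieldSketch
    Sub Main()
        ' Hypothetical record bytes laid out as in the documentation above
        Dim record As Byte() = {&H0, &H2A, &H0, &H0, &HC2,
                                &H0, &H0, &H1, &H23, &H4C,
                                &HC1, &HC2, &HC3}

        ' Positions 1-2: record count as a big-endian binary integer
        Dim recordCount As Integer = CInt(record(0)) * 256 + record(1)

        ' Position 5: record type, a single EBCDIC (code page 37) character
        Dim ebcdic As Encoding = Encoding.GetEncoding(37)
        Dim recordType As String = ebcdic.GetString(record, 4, 1)

        ' Positions 11-13: sales identifier, three EBCDIC characters
        Dim salesId As String = ebcdic.GetString(record, 10, 3)

        Console.WriteLine(recordCount & " " & recordType & " " & salesId)   ' 42 B ABC
    End Sub
End Module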
In VBA you did not need to read the whole file in; you could read it record by record. I would presume you can do the same in VB.NET.
Useful Utilities
These tools might be useful for testing.
The RecordEditor should be able to display the file. The Layout Wizard should be able to determine the format of the file. Alternatively, use the Cobol copybook below.
The Java program CobolToCsv should be able to convert the file to CSV.
01 tape-record.
05 record-count pic s9(3) comp.
05 filler pic x(2).
05 record-type pic x.
05 Sales-Location pic s9(9) comp-3.
05 Sales-Identifier pic x(3).
I was working on a project and I can't use bitwise negation with a U32 (unsigned 32-bit) value, because when I tried the negation operator on, for example, 1, the result (according to that function) was the biggest number possible for a U32, and I expected zero. My idea is to work with a binary number like 110010 and negate only the bits from the first 1-bit onward, giving 001101. Is there a way to do that in LabVIEW?
This computes the value you are looking for.
1110 --> 0001 (aka, 1)
1010 --> 0101 (aka, 101)
111 --> 000 (aka, 0) [indeed, all patterns that are all "1" will become 0]
0 --> 0 [because there are no bits to negate... maybe you want to special-case this as "1"?]
Note: This is a VI Snippet. Save the .png file to your disk then drag the image from your OS into LabVIEW and it will generate the block diagram (I wrote it in LV 2016, so it works for 2016 or later). Sometimes dragging directly from browser to diagram works, but most browsers seem to strip out the EXIF data that makes that work.
Here's an alternative solution without a loop. It formats the input into its string representation (without leading zeros) to figure out how many bits to negate - call this n - and then XORs the input with 2^n - 1.
Note that this version will return an output of 1 for an input of 0.
Using the string functions feels a bit hacky... but it doesn't use a loop!!
Obviously we could instead get the 'bit length' of the input using its base-2 log, but I haven't sat down and worked out how to ensure there are no rounding issues when the input has only its most significant bit set: in that case the base-2 log should be exactly an integer, but might come out a fraction smaller.
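For reference, a rough VB.NET rendering of that trick (not LabVIEW, just to show the logic; the function name is made up):

Function NegateBelowTopBit(x As UInteger) As UInteger
    ' Bit length n from the base-2 string representation (no leading zeros)
    Dim n As Integer = Convert.ToString(CLng(x), 2).Length
    Dim mask As UInteger = CUInt((1UL << n) - 1UL)       ' n low bits set
    Return x Xor mask                                    ' flip everything up to the top bit
End Function

' NegateBelowTopBit(50) = 13   (110010 -> 001101)
' NegateBelowTopBit(0)  = 1, matching the behaviour noted above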
Here's a solution without strings/loops using the conversion to float (a common method of computing floor(log_2(x))). This won't work on unsigned types.
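Expressed as a VB.NET sketch of that idea (the function name is made up): the exponent field of the IEEE double gives floor(log2(x)) exactly, since every 32-bit integer converts to a double without rounding, so the unsigned caveat doesn't bite here.

Function NegateBelowTopBitViaFloat(x As UInteger) As UInteger
    If x = 0UI Then Return 0UI                                      ' nothing to negate
    Dim bits As Long = BitConverter.DoubleToInt64Bits(CDbl(x))
    Dim exponent As Integer = CInt((bits >> 52) And &H7FFL) - 1023  ' floor(log2(x))
    Dim mask As UInteger = CUInt((1UL << (exponent + 1)) - 1UL)     ' ones up to and including the top bit
    Return x Xor mask
End Function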
I need to use bit operations in VBA. For example, I want to set the first bit to 0, the second bit to 1, and so on. How can I do it?
Thanks a lot!
You could use masks.
If you want to set* the n-th bit, you can perform an Or operation on your original value and a mask filled with 0s except for the n-th position.
That way, 1100 Or 0001 will set the first bit resulting in 1101. 1100 Or 0010 will set the second bit, resulting in 1110.
If you want to unset* the n-th bit, you can do the opposite: perform an And operation on your original value with a mask filled with 1s except for the n-th position.
That way, 0011 And 1110 will unset the first bit resulting in 0010, 0011 And 1101 will unset the second bit, resulting in 0001.
* Set means to turn the bit into 1. Unset means to turn the bit into 0.
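A minimal sketch of those masks in VB-style code (the Or, And and Not operators behave the same way in VBA):

Dim value As Integer = &HC                            ' 1100
Dim withBit1Set As Integer = value Or &H1             ' 1100 Or 0001 = 1101
Dim withBit2Set As Integer = value Or &H2             ' 1100 Or 0010 = 1110

Dim other As Integer = &H3                            ' 0011
Dim withBit1Unset As Integer = other And (Not &H1)    ' 0011 And ...1110 = 0010
Dim withBit2Unset As Integer = other And (Not &H2)    ' 0011 And ...1101 = 0001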
Does anyone know how the Base64 decoding algorithm works? Many articles, journal papers, and books on the internet explain the Base64 encoding algorithm, but Base64 decoding is not explained. So my question is: how does the Base64 decoding algorithm work?
Thank you, I hope you can answer.
Basically you take one character at a time and convert it to the bits that it represents. So an A character translates into 000000 and the / character translates into 111111. Then you concatenate the bits, so you get 000000 | 111111. This, however, won't fit into a byte; you have to split and shift the result to get 00000011 and 1111xxxx, where xxxx is not known yet.
Of course, in a high-performance implementation you may only be able to work with whole bytes, so each character value carries two spurious high bits (shown below separated by a space from the bits that actually mean something).
((00 000000 << 2) & 11111100) | ((00 111111 >> 4) & 00000011) -> 00000011
((00 111111 << 4) & 11110000) | ???????? -> 1111xxxx
...
First, with the shift operator << you put the bits in place. Then with the binary AND operator & you single out the bits you want, and with the binary OR operator | you assemble the bits of the two characters.
Now after 4 characters you will have 3 full bytes. It may, however, be that your result is not a multiple of three bytes. In that case the final group has either two or three characters, possibly followed by padding (=). One character is not possible, as that would imply an incomplete byte with only the highest bits set. In that case you should simply ignore the final spurious bits encoded by the last character.
Personally I like to use a state machine to do the decoding. I've already created a couple of base 64 streams that use a state machine in Java. It may be useful to only decode once you have 4 characters (3 full bytes) until you are at the end of the base 64 encoding.
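As an illustration (not a full decoder), here is a VB.NET sketch of the bit shuffling for one complete group of four characters; padding and whitespace handling are left out, and in .NET Convert.FromBase64String would normally do all of this for you:

Const Alphabet As String = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"

Function DecodeGroup(group As String) As Byte()
    ' Each character stands for 6 bits: its index in the Base64 alphabet
    Dim v0 As Integer = Alphabet.IndexOf(group(0))
    Dim v1 As Integer = Alphabet.IndexOf(group(1))
    Dim v2 As Integer = Alphabet.IndexOf(group(2))
    Dim v3 As Integer = Alphabet.IndexOf(group(3))

    ' Reassemble 4 x 6 bits into 3 x 8 bits with shifts, And and Or
    Dim b0 As Byte = CByte(((v0 << 2) Or (v1 >> 4)) And &HFF)
    Dim b1 As Byte = CByte(((v1 << 4) Or (v2 >> 2)) And &HFF)
    Dim b2 As Byte = CByte(((v2 << 6) Or v3) And &HFF)
    Return {b0, b1, b2}
End Function

' DecodeGroup("TWFu") returns the three bytes of the ASCII text "Man"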
I just learned about the UDP checksum calculation.
But I am confused: does the algorithm detect all errors?
Of course not. No checksum can detect all errors.
The UDP checksum cannot detect all errors, but it does detect many. It will detect any single bit flip, but if the packet is altered such that the sum of all the data as 16 bit values remains constant, the checksum will not detect the error.
Usually checksums can detect only the most common errors, but not all of them. Actually, the UDP checksum is optional in combination with IPv4, as UDP is designed as an unreliable service.
No, it cannot detect all errors.
Suppose we have two 16-bit numbers:
A = 0101 1001 1010 0010 and
B = 1010 0100 0110 0101. Their sum is
S = 1111 1110 0000 0111, and the one's complement of this sum is
C = 0000 0001 1111 1000,
which is placed in the checksum field of the UDP segment.
So the sender sends these three 16-bit numbers, and at the receiver the sum of A and B is calculated again and added to the received UDP checksum.
If that total is 1111 1111 1111 1111, the receiver concludes there is no error in the received segment.
But here is the catch: if the last two bits of A (10) get flipped to 01, and similarly the last two bits of B (01) get flipped to 10, then S stays the same and S + C is again 1111 1111 1111 1111. That is definitely a problem, because the checksum fails to catch errors in which multiple bit flips cancel each other out.
So, the UDP checksum cannot detect errors of all kinds.
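A small VB.NET sketch of the 16-bit one's-complement sum behind that example (the arithmetic only, not the full UDP pseudo-header checksum):

Function OnesComplementSum16(words As UShort()) As UShort
    Dim sum As UInteger = 0
    For Each w As UShort In words
        sum += w
    Next
    ' Fold any carries out of bit 15 back into the low 16 bits (end-around carry)
    While sum > &HFFFFUI
        sum = (sum And &HFFFFUI) + (sum >> 16)
    End While
    Return CUShort(sum)
End Function

' With A = &H59A2 and B = &HA465, the checksum is C = Not OnesComplementSum16({A, B}) = &H1F8.
' Flipping the last two bits of both A (...10 -> ...01) and B (...01 -> ...10) leaves the sum
' unchanged, so the receiver still ends up with &HFFFF and the error goes undetected.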