I have a 16-bit word that can contain anywhere from 1 to 16 data values. A value is decoded by knowing the MSB and LSB of its field within the 16-bit word and grabbing those bits.
I'm using VB and I just don't know how to do this.
Example
I have a word that is
&HA4F2
1010 0100 1111 0010
I know my data is LSB 3 to MSB 9. Bit ordering is left to right
So the data is 0010011
How do I get this in VB code? I want to work in bytes, because after I get the packed bits I then have to apply type casts to them (signed_fixed, integer, 2's complement, etc.).
Thanks
You should use a mask (bitwise AND; see the And keyword), and probably also a bitwise right shift (see the >> operator).
Conceptually:
1010 0100 1111 0010 '= the data
0001 1111 1100 0000 '= 1FC0, the mask
-------------------- And
0000 0100 1100 0000 '= 04C0
-------------------- >> 6
0000 0000 0001 0011 '= 0013; your value is now in the rightmost bits
In code:
Dim newData As Integer = (rawData And &H1FC0) >> 6
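If the LSB/MSB positions vary from word to word, you can build the mask on the fly. Here's a minimal VB.NET sketch (the ExtractBits name is my own) that takes the field's bit positions counted from the right, 0-based:

Function ExtractBits(ByVal word As Integer, ByVal lsb As Integer, ByVal msb As Integer) As Integer
    ' Width of the field, e.g. bits 6..12 -> 7 bits.
    Dim width As Integer = msb - lsb + 1
    ' Build a mask of `width` one-bits, e.g. 7 bits -> &H7F.
    Dim mask As Integer = (1 << width) - 1
    ' Shift the field down to bit 0 and mask off everything above it.
    Return (word >> lsb) And mask
End Function

ExtractBits(&HA4F2, 6, 12) returns &H13, the same value the hard-coded mask produces above.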
Related
My end goal is to add zeroes in front of my data, so 918 becomes 0918 and 10 becomes 0010, limited to 4 characters. My solution so far is to use SUBSTR, as I do below:
PROC SQL;
CREATE TABLE WORK.QUERY_FOR_DAGLIGEKORREKTION_0000 AS
SELECT (SUBSTR(line_item, 1, 4)) AS line_item,
(SUBSTR(column_item, 1, 4)) AS column_item
FROM QUERY_FOR_DAGLIGEKORREKTIONER t1;
QUIT;
But when I run my query I get the following error:
ERROR: Function SUBSTR requires a character expression as argument 1.
ERROR: Function SUBSTR requires a character expression as argument 1.
This is my data set:
line_item column_item
918 10
230 10
260 10
918 10
918 10
918 10
70 10
80 10
110 10
250 10
35 10
What am I doing wrong? And is there another, maybe easier, way to add zeroes in front of my data?
I hope you can lead me in the right direction.
In SAS you can associate a format with a numeric variable to specify how the value is rendered when output in a report or displayed in a query result.
Example:
Specify a column to be displayed using the Z<n>. format.
select <numeric-var> format=z4.
The underlying column is still numeric.
If you want to convert the numeric result permanently to a character type, use the PUT function.
select PUT(<numeric-expression>, Z4.) as <column-name>
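Applied to the query in the question, a sketch (reusing the question's dataset and column names) might look like:

PROC SQL;
CREATE TABLE WORK.QUERY_FOR_DAGLIGEKORREKTION_0000 AS
SELECT PUT(line_item, Z4.) AS line_item,     /* numeric -> 4-character string, zero-padded */
       PUT(column_item, Z4.) AS column_item
FROM QUERY_FOR_DAGLIGEKORREKTIONER t1;
QUIT;

The resulting columns are character values of length 4, padded with leading zeroes.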
I found a solution by searching for something similar to the Oracle solution by @d r, and the following solves the problem:
put(line_item, z4.) AS PAD_line_item,
put(column_item, z4.) AS PAD_column_item,
resulting in:
line_item column_item
0918 0010
0230 0010
0260 0010
0918 0010
0918 0010
0918 0010
0070 0010
0080 0010
0110 0010
0250 0010
0035 0010
I hope this will help someone in the future with leading zeroes.
Oracle
Select
LPAD(1, 4, '0') "A",
LPAD(12, 4, '0') "B",
LPAD(123, 4, '0') "C",
LPAD(1234, 4, '0') "D",
LPAD(12345, 4, '0') "E"
From Dual
--
-- R e s u l t
--
-- A B C D E
-- ---- ---- ---- ---- ----
-- 0001 0012 0123 1234 1234
Add 10,000 to the value; cast the result to VARCHAR(5) (or longer); take SUBSTR(2,4) of that.
SELECT
SUBSTR((line_item + 10000)::VARCHAR(5),2,4) AS s_line_item
, SUBSTR((column_item + 10000)::VARCHAR(5),2,4) AS s_column_item
FROM indata;
-- out s_line_item | s_column_item
-- out -------------+---------------
-- out 0918 | 0010
-- out 0230 | 0010
-- out 0260 | 0010
-- out 0918 | 0010
-- out 0918 | 0010
-- out 0918 | 0010
-- out 0070 | 0010
-- out 0080 | 0010
-- out 0110 | 0010
-- out 0250 | 0010
-- out 0035 | 0010
deleted due to unclear question
Why do you need octal at all? fec0ded is obviously hexadecimal, and 8 is either hex or decimal (actually, it doesn't matter; it's still the same 8).
Calculations are done as follows:
FEC0DED xor 8 (hex)
=
1111 1110 1100 0000 1101 1110 1101 xor 1000 (bin)
=
1111 1110 1100 0000 1101 1110 0101 (bin)
=
FEC0DE5 (hex)
That is, you flip the 4th least significant bit.
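A quick way to verify this in code (VB.NET shown here as an example):

Dim result As Integer = &HFEC0DED Xor &H8   ' = &HFEC0DE5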
I am a bit confused by the Oct function. Oct(-8) does not return -10; it returns 37777777770. I will just write my own function, but does anyone know why it gives back such a weird result?
How do computers represent negative numbers in binary anyway?
Throughout the years several ways have been dreamed up for representing negative numbers, but for the sake of keeping this answer on the short side of a dissertation we're only going to look at two's complement.1
To calculate a negative number's two's complement, we use the following steps:
Take the magnitude of the number (a.k.a. its absolute value)
Complement all the bits (Bitwise Not)
Add 1 to the result (Simple addition)
So what's -8 in two's complement binary? First let's convert the absolute value to binary. (For now we'll work in 8 bits for simplicity; I've worked out the same answer below with 32-bit numbers.)
|8| => 0000 1000
The next step is to complement all of the bits in the number
0000 1000 => 1111 0111
Finally we add 1 to the result to get our two's complement representation
1111 0111
+ 1
----------
1111 1000 (Don't forget to carry)
Ok, a brief review of octal numbers. Octal, or base 8, is another way of representing binary numbers in a more compact way. The more observant will notice that 8 is a power of 2 and we can certainly use that fact to our advantage converting our negative number to octal.
Why does this make Oct produce weird results with negative numbers?
The Oct function operates on the binary representation of the number, converting it to its octal (base 8) representation. So let's convert our 8-bit number to octal.
1111 1000 => 11 111 000 => 011 111 000 => 370
Note that since 8 = 2^3 it's easy to convert, because all we have to do is break the number up into groups of three bits and convert each group. (Much like how hex can be converted by breaking into groups of 4 bits.)
So how do I get Oct to produce a regular result?
Convert the absolute value of the number to octal using Oct. If the number is less than 0, stick a negative sign in front.
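A minimal VB sketch of that workaround (the SignedOct name is my own):

Function SignedOct(ByVal n As Integer) As String
    ' Convert the magnitude, then restore the sign by hand.
    If n < 0 Then Return "-" & Oct(Math.Abs(n))
    Return Oct(n)
End Function

SignedOct(-8) returns "-10", where Oct(-8) returns "37777777770". (One caveat: Math.Abs overflows for Integer.MinValue.)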
Example using 32 bit numbers
We'll stay with -8 because it's been so good to us this whole time. So converting -8 to two's complement gives:
Convert: 0000 0000 0000 0000 0000 0000 0000 1000
Invert: 1111 1111 1111 1111 1111 1111 1111 0111
Add 1: 1111 1111 1111 1111 1111 1111 1111 1000
Separate: 11 111 111 111 111 111 111 111 111 111 000
Pad: 011 111 111 111 111 111 111 111 111 111 000
Convert: 3 7 7 7 7 7 7 7 7 7 0
Shorten: 37777777770
Which produces the result you're seeing when you call Oct(-8).
Armed with this knowledge, you can now also explain why Hex(-8) produces 0xFFFFFFF8. (And you can see why I used 8 bit numbers throughout most of this.)
1 For an overly detailed introduction to binary numbers, check out the Wikipedia article
I have been working on some reports and recently found a small guide for the system I have been querying against. Part of the syntax it offers up is like the following:
... "WHERE [FIELD] & [VALUE] = [VALUE]"
i.e.: ... "WHERE flags & 16 = 16"
I am just curious as to what this syntax means... I understand something like WHERE flags = 16, but not the '& 16 = 16' part. Any clarification?
The article referenced: http://rightfaxapi.com/advanced-sql-for-manipulating-faxes/
Thank you,
Wes
The & is doing a bitwise "and": a result bit is "1" only when both corresponding bits are 1. The logic overall is checking that all the bits in value are set in field.
As an example, consider that value is:
0000 0101
Then if field is
1111 1111
The & is:
0000 0101
(The 1s are only where both are 1.)
And this is the same as value. So, this passes the condition.
Now if field is:
0001 1110
Then the & is:
0000 0100
And this differs from value, so it does not pass the condition.
& is bitwise AND. In your example, your mask is 16 (0x0010, binary 0000 0000 0001 0000). Since the mask has a single bit set, the result of the AND operation will be either 0 (the mask bit is not set) or the mask value (the mask bit IS set). So your WHERE A & 16 = 16 expression is testing whether bit 5 of the integer value, counting from the right, is set.
Examples:
48 & 16 = 16 is TRUE:
48 (binary: 0000 0000 0011 0000)
AND 16 (binary: 0000 0000 0001 0000)
-- -----------------------------
16 (binary: 0000 0000 0001 0000)
33 & 16 = 16 is FALSE:
33 (binary: 0000 0000 0010 0001)
AND 16 (binary: 0000 0000 0001 0000)
-- -----------------------------
0 (binary: 0000 0000 0000 0000)
Easy!
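A runnable sketch of the same test (the table name and sample rows are invented for illustration):

CREATE TABLE faxes (id INT, flags INT);
INSERT INTO faxes VALUES (1, 48), (2, 33);

-- Returns only id = 1: 48 & 16 = 16, but 33 & 16 = 0.
SELECT id FROM faxes WHERE flags & 16 = 16;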
Does anyone know how I can solve this problem? Any help would be great... I can't seem to get my head around it.
As you know, a binary digit can only be either 1 or 0.
Say you had an 8-digit binary number, like a byte: 0001 1000
I'm trying to figure out an equation for the maximum number of combinations you could get from an 8-digit binary number.
What I mean is: say you had a two-digit binary number; the maximum binary combinations that you could have are
00
01
10
11
Therefore the total maximum number of combinations from a 2-digit binary number = 4
Example 2
If you had a 3-digit number, the maximum binary combinations would be
000
001
010
100
101
111
110
011
Therefore the total maximum number of binary combinations from a 3-digit number = 8
Example 3
If it were a 4-digit number, the maximum binary combinations that you could have are
0000
0001
0010
0100
1000
0111
0110
1111
1110
1101
1011
1001
Total maximum combinations = 12
I recently asked this question and it was answered. Thank you manu-fatto and zgnilec; they were kind enough to let me know it is a simple equation. The answer/equation is 2^(digit size).
I guess my next problem is how to write a small program that can show these combinations in Xcode or NSLog. I'm good with Objective-C, and output I can view with NSLog would be great.
All I know is it would look something like:
int digitSize = 8;
int combinationTotal = pow(2, digitSize); // 2^8 = 256 (note: ^ is XOR in C, not exponentiation)
NSMutableArray *combinations = [NSMutableArray arrayWithCapacity:combinationTotal];
Output
NSString *combination1 = @"0000 0000";
NSString *combination2 = @"0000 0001";
NSString *combination3 = @"0000 0010";
NSLog(@"combination 1 = %@", combination1);
NSLog(@"combination 2 = %@", combination2);
NSLog(@"combination 3 = %@", combination3);
……
NSLog(@"combination 256 = ???? ????");
Sorry for the vague language; I only started learning programming 3 months ago and I still have a lot of tutorials to go through.
I'm trying to build a data compression algorithm...
Basically, data compression is about reducing the number of bits... the fewer the bits, the smaller a file is.
i.e.
A file with 700 bits is smaller than a file with 900 bits
8 bits = 1 byte
1024 bytes = 1 KB
1024 KB = 1 MB
I don't know if it's even possible, but I just thought: what if you had an algorithm that could read 1024 bits at a time... with the equation that's 2^1024 = math error :( == the total number of bit combinations possible.
Once you have the total number of combinations, you set each combination to a symbol, e.g. 000101010010101011001011011010101010140010101101000000001110100101100001010100000......0011010 = symbol #
So from now on, whenever the computer sees the symbol #, it recognises that it is equal to the binary number 000101010010101011001011011010101010140010101101000000001110100101100001010100000......0011010
To better understand it, just think of number plates on a car/vehicle: they are only a few characters, but when you punch them into a police database or any car database, more information comes out. It's the same principle...
Basically, the symbols are a key to more data.
I don't know if it makes sense, but... in theory, if you could read 8388608 bits at a time
8388608 bits = 1 megabyte......
ten symbols could mean 10 MB... you could create digital media 2D barcodes.
It's just a thought I had watching Stargate lol :)
2 to the power of 8, where 8 is the number of digits.
Edit: only read the first question :)
Create a function that will display an integer as binary:
for (i = 0; i < pow(2, n); i++)
{
    displayBits(i);
}
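displayBits isn't defined above; a minimal C sketch (here taking the bit count as a second parameter) might be:

#include <stdio.h>

/* Print the low n bits of value, most significant bit first. */
void displayBits(unsigned int value, int n)
{
    for (int j = n - 1; j >= 0; j--)
        putchar(((value >> j) & 1) ? '1' : '0');
    putchar('\n');
}

Calling displayBits(i, 8) inside the loop prints all 256 byte patterns.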
A quick implementation
#import <Foundation/Foundation.h>

int main(int argc, const char * argv[])
{
    @autoreleasepool {
        NSUInteger length = 8;         // number of digits
        NSUInteger n = pow(2, length); // number of possible values
        for (int i = 0; i < n; i++) {
            NSString *repr = @"";
            for (int j = 0; j < length; ++j) {
                if ([repr length] % 5 == 0)
                    repr = [@" " stringByAppendingString:repr]; // add a blank after every 4th digit
                int x = (i >> j) & 1;
                repr = [[NSString stringWithFormat:@"%u", x] stringByAppendingString:repr];
            }
            NSLog(@"%@", repr);
        }
    }
    return 0;
}
Output
0000 0000
0000 0001
0000 0010
0000 0011
0000 0100
0000 0101
0000 0110
0000 0111
0000 1000
0000 1001
0000 1010
0000 1011
0000 1100
0000 1101
0000 1110
0000 1111
0001 0000
…
1110 1100
1110 1101
1110 1110
1110 1111
1111 0000
1111 0001
1111 0010
1111 0011
1111 0100
1111 0101
1111 0110
1111 0111
1111 1000
1111 1001
1111 1010
1111 1011
1111 1100
1111 1101
1111 1110
1111 1111
The core of this program is this:
for (int i = 0; i < n; i++) {
    //…
    for (int j = 0; j < length; ++j) {
        int x = (i >> j) & 1;
        //…
    }
}
This runs for i = 0 to (2^length) - 1, and in the inner for-loop it checks, for each j of the length bits, whether bit j of i is 1, and prepends that digit to the representation string.
As you are a beginner, you probably don't know what this means: int x = (i >> j) & 1;
>> shifts the bits of the left-hand integer to the right by as many bit positions as the right-hand side specifies, and & 1 performs a bitwise AND with 1, keeping only the least significant bit.
So for i == 3 and length == 8:
3 as a binary string representation:
j = 0:  0000 0011 >> 0  ->  0000 0011
                          & 0000 0001
                          -----------
                            0000 0001  -> 1,  repr = 1
j = 1:  0000 0011 >> 1  ->  0000 0001
                          & 0000 0001
                          -----------
                            0000 0001  -> 1,  repr = 11
j = 2:  0000 0011 >> 2  ->  0000 0000
                          & 0000 0001
                          -----------
                            0000 0000  -> 0,  repr = 011
j = 3:  0000 0011 >> 3  ->  0000 0000
                          & 0000 0001
                          -----------
                            0000 0000  -> 0,  repr = 0011
(the same through j = 7)                      repr = 0000 0011