Explanation of JVM bytecode

Can someone explain how the numbers alongside JVM opcodes are calculated?
I think it is 1 byte for the opcode and the rest of the bytes for operands. Am I correct?
Example:
Method int add12and13()
0 bipush 12
2 bipush 13
4 invokestatic #3 // Method Example.addTwoStatic(II)I
7 ireturn

You are right. That number is the bytecode offset of each instruction from the beginning of the method.
bipush has a 1-byte operand, so it takes 2 bytes in total.
invokestatic takes 3 bytes: the opcode plus 2 bytes for a constant pool index, so the next instruction starts 3 bytes after this invokestatic. In the example above, the offsets are 0, 0 + 2 = 2, 2 + 2 = 4, and 4 + 3 = 7.

Related

Difference between 1 and 1'b1 in Verilog

What is the difference between just giving 1 and giving 1'b1 in Verilog code?
The 1 is 32 bits wide, and is thus equivalent to 32'b00000000_00000000_00000000_00000001
The 1'b1 is one bit wide.
There are several places where you should be aware of the difference in width, but the one most likely to catch you out is in concatenations ({}).
wire [ 7:0] A;
wire [ 8:0] B;
assign A = 8'b10100101;
assign B = {1'b1,A}; // B is 9'b110100101
assign B = {1,A}; // B is 9'b110100101
assign B = {A,1'b1}; // B is 9'b101001011
assign B = {A,1}; // B is 9'b000000001 !!!!
So, what's the difference between, say,
logic [7:0] count;
...
count <= count + 1'b1;
and
logic [7:0] count;
...
count <= count + 1;
Not a lot. In the first case your simulator/synthesiser will do this:
i) expand the 1'b1 to 8'b1 (because count is 8 bits wide)
ii) do all the maths using 8 bits (because now everything is 8 bits wide).
In the second case your simulator/synthesiser will do this:
i) do all the maths using 32 bits (because 1 is 32 bits wide)
ii) truncate the 32-bit result to 8 bits wide (because count is 8 bits wide)
The behaviour will be the same. However, that is not always the case. This:
count <= (count * 8'd255) >> 8;
and this:
count <= (count * 255) >> 8;
will behave differently. In the first case, 8 bits will be used for the multiplication (the width of the 8 in the >> 8 is irrelevant) and so the multiplication will overflow; in the second case, 32 bits will be used for the multiplication and so everything will be fine.
1'b1 is a binary, unsigned, 1-bit-wide integral value. In the original Verilog specification, 1 had the same type as integer: it was signed, but its width was unspecified. A tool could choose the width based on its host implementation of the int type.
Since Verilog 2001 and SystemVerilog 2005, the width of integer and int has been fixed at 32 bits. However, because of this originally unspecified width, and because so many people write 0 or 1 without realizing that it is now 32 bits wide, the standard does not allow an unbased literal inside a concatenation: {A,1} is illegal.

What is the minimum size of an address register for a computer with 5TB of memory?

There is this question that I'm having a bit of difficulty answering.
Here it is:
An n-bit register can hold 2^n distinct bit patterns. As such,
it can only be used to address a memory whose number of addressable units
(typically, bytes) is less than or equal to 2^n. In this question, register
sizes need not be a power of two. K = 2^10
a) What is the minimum size of an address register for a computer
with 5 TB of memory?
b) What is the minimum size of an address register for a computer
with 7 TBs of memory?
c) What is the minimum size of an address register for a computer
with 2.5 PBs of memory?
From the conversion, I know that:
1KB = $2^{10}$ bytes
1MB = $2^{20}$ bytes
1GB = $2^{30}$ bytes
1TB = $2^{40}$ bytes
If I convert 5 TB into bytes, I get 5,497,558,138,880 bytes.
What would be the next step though? I know that 1 byte = 8 bits
This is how I would proceed:
1 TB = 2^40 bytes.
Calculate the number of bytes in 5 TB: 5 × 2^40 = 5,497,558,138,880 bytes (call this number n).
Then log2(n) gives the minimum size of the address register; in this case it is log2(5,497,558,138,880) ≈ 42.321928095 bits, which I would round up to 43 bits.
Same logic for the other questions.
Careful: there is no need to divide by 8 here. The addressable unit is the byte, and 5 TB is already a count of bytes; dividing by 8 would only be appropriate if the memory size had been given in bits.
Using logarithms, 2^n ≥ 5,497,558,138,880, therefore n ≥ log2(5,497,558,138,880) ≈ 42.32, which rounds up to n = 43.
The same steps can be used for parts b and c.
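As a quick cross-check (not part of the original answers), here is a minimal C sketch that computes the minimum register width as ceil(log2(bytes)), assuming byte-addressable memory; the helper name min_addr_bits is mine:
#include <stdio.h>
#include <math.h>

/* Minimum number of address bits for a byte-addressable memory of `bytes` bytes. */
static int min_addr_bits(double bytes)
{
    return (int)ceil(log2(bytes));
}

int main(void)
{
    double tb = pow(2, 40), pb = pow(2, 50);
    printf("5 TB   -> %d bits\n", min_addr_bits(5.0 * tb));   /* 43 */
    printf("7 TB   -> %d bits\n", min_addr_bits(7.0 * tb));   /* 43 */
    printf("2.5 PB -> %d bits\n", min_addr_bits(2.5 * pb));   /* 52 */
    return 0;
}
Compile with the math library (e.g. -lm with gcc); the three printed values match the hand calculation above.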

How to divide a BCD by 2 on an 8085 processor?

On an 8085 processor, an efficient algorithm for dividing a BCD value by 2 comes in handy when converting BCD to binary representation. You might think of repeated subtraction or of multiplying by 0.5; however, these approaches require lengthy arithmetic.
Therefore, I would like to share with you the following code (in 8085 assembler) that does it more efficiently. The code has been thoroughly tested on GNUSim8085 and ASM80 emulators. If this code was helpful to you, please share your experience with me.
Before running the code, put the BCD in register A. Set the carry flag if there is a remainder to be received from a more significant byte (worth 50). After execution, register A will contain the result. The carry flag is used to pass the remainder, if any, to the next less significant byte.
The algorithm uses the DAA instruction after manipulating the C and AC flags in a very particular way, thus taking into account that any remainder passed down to the next nibble (i.e. half-octet) is worth 5 instead of 8.
;Division of BCD by 2 on an 8085 processor
;Set initial values.
;Register A contains a two-digit BCD. Carry flag contains remainder.
stc ; set the carry flag
cmc ; complement it: carry is now 0 (no incoming remainder in this example)
mvi a, 85H ; example input: BCD 85 in A
;Do modified decimal adjust before division.
cmc ; complement the carry flag
cma ; complement A (one's complement)
rar ; rotate A right through carry
adc a ; A = A + A + carry
cma ; complement A again
daa ; decimal adjust accumulator
cmc ; complement the carry flag
;Divide by 2.
rar ; rotate A right through carry: the halving step
;Save quotient and remainder to registers B and C.
mov b, a ; B = intermediate quotient
mvi a, 00H ; A = 0 (flags are not affected)
rar ; shift the carry into bit 7 of A
mov c, a ; C now holds that bit in its MSB
;Continue working on decimal adjust.
mov a, b
sui 33H ; subtract 33H as part of the decimal adjustment
mov b, a
mov a, c
ral ; move the saved bit from C's MSB back into the carry flag
mov a, b ; result in A; carry holds the remainder for the next byte
hlt
Suppose a two-digit BCD number is represented as: D7D6D5D4 D3D2D1D0
For a division by 2 in binary (or hex), simply right-shift the number by one place. If a bit falls off the end, the remainder is 1, and 0 otherwise. The same applies to a two-digit (8-bit) BCD number when D4 is 0, i.e. when no bit is effectively shifted down from the higher-order four bits.
Now if D4 is 1 (before the shift), the shift will introduce an 8 (1000) into the lower-order four bits, which jeopardizes this process. Observe that in BCD the shifted-down bit should be worth 10/2 = 5, not 16/2 = 8. Thus we can simply adjust by subtracting 8 - 5 = 3 from the lower-order four bits, i.e. 03H from the entire number.
The following code summarizes this strategy (a C sketch of the same idea appears after it). We assume the accumulator holds the data; after the division, the result is kept in the accumulator and the remainder in register B.
MVI B,00H ; remainder = 0
STC
CMC ; clear the carry flag
RAR ; right shift the data
JNC SKIP ; no bit fell out, so remainder stays 0
INR B ; CY=1, so remainder = 1
SKIP: MOV D,A ; backup
ANI 08H ; test D3 after the shift (which was D4 before the shift)
MOV A,D ; get the data from backup
JZ FIN ; if D4 before the shift was 0
SUI 03H ; adjustment for the shift
FIN: HLT ; A has the result, B has the remainder
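To make the adjust-by-3 rule easy to verify outside an emulator, here is a minimal C sketch of the same strategy (the function name bcd_div2 is my own; the test value 85 mirrors the example in the first post):
#include <stdio.h>
#include <stdint.h>

/* Divide a two-digit packed BCD value by 2.
   Returns the quotient as packed BCD; *rem receives the remainder (0 or 1). */
static uint8_t bcd_div2(uint8_t bcd, int *rem)
{
    *rem = bcd & 0x01;      /* the bit shifted out is the remainder */
    uint8_t q = bcd >> 1;   /* plain binary shift */
    if (bcd & 0x10)         /* D4 was 1: the shift put an 8 into the low nibble, */
        q -= 0x03;          /* but it should be worth 5, so subtract 3 */
    return q;
}

int main(void)
{
    int rem;
    uint8_t q = bcd_div2(0x85, &rem);   /* BCD 85 / 2 = 42, remainder 1 */
    printf("quotient %02x, remainder %d\n", q, rem);
    return 0;
}
Like the assembly above, this only handles a single byte with no incoming remainder; chaining bytes needs the carry-worth-50 handling described in the first post.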

Bitwise operators: How do I clear the most significant bit?

I'm working on a problem where I need to convert an integer into a special text encoding. The requirements state that I pack the int into bytes and then clear the most significant bit. I am using bitwise operators but am unsure of how to clear the most significant bit. Here is the problem and the method I'm working with so far:
PROBLEM:
For this task, you need to write a small program including a pair of functions that can
convert an integer into a special text encoding
The Encoding Function
This function needs to accept a signed integer in the 14-bit range [-8192..+8191] and return a 4 character string.
The encoding process is as follows:
1. Add 8192 to the raw value, so its range is translated to [0..16383]
2. Pack that value into two bytes such that the most significant bit of each is cleared
Unencoded intermediate value (as a 16-bit integer):
00HHHHHH HLLLLLLL
Encoded value:
0HHHHHHH 0LLLLLLL
3. Format the two bytes as a single 4-character hexadecimal string and return it.
Sample values:
Unencoded (decimal) | Intermediate (decimal) | Intermediate (hex) | Encoded (hex)
0 | 8192 | 2000 | 4000
-8192 | 0 | 0000 | 0000
8191 | 16383 | 3fff | 7F7F
2048 | 10240 | 2800 | 5000
-4096 | 4096 | 1000 | 2000
My function
-(NSString *)encodeValue{
// get the input value
int decValue = [_inputValue.text intValue];
char* bytes = (char*)&decValue;
NSNumber *number = @(decValue+8192); //Add 8192 so that the number can't be negative, because we're about to lose the sign.
u_int16_t shortNumber = [number unsignedShortValue]; //Convert the integer to an unsigned short (2 bytes) using NSNumber.
shortNumber = shortNumber << 1; // !!!! This is what I'm doing to clear the MSB !!!!!!!
NSLog(@"%hu", shortNumber);
NSString *returnString = [NSString stringWithFormat:@"%x", shortNumber]; //Convert the 2 byte number to a hex string using format specifiers
return returnString;
}
I'm using the bitwise shift operator to clear the MSB, and I get the correct answer for a couple of the values, but not every time.
If I am understanding you correctly then I believe you are after something like this:
u_int16_t number;
number = 0xFFFF;
number &= ~(1 << ((sizeof(number) * 8) - 1));
NSLog(@"%x", number); // Output will be 7fff
How it works:
sizeof(number) * 8 gives you the number of bits in the input number (eg. 16 for a u_int16_t)
1 << (number of bits in number - 1) gives you a mask with only the MSB set (eg. 0x8000)
~(mask) gives you the bitwise NOT of the mask (eg. 0x7fff)
ANDing the mask with your number then clears only the MSB leaving all others as they were
You are misunderstanding your task.
You are not supposed to clear the most significant bit anywhere. You have 14 bits. You are supposed to separate these 14 bits into two groups of seven bits. And since a byte has 8 bits, storing 7 bits into a byte will leave the most significant bit cleared.
PS. Why on earth are you using an NSNumber? If this is homework, I would fail you for the use of NSNumber alone, no matter what the rest of the code does.
PS. What is this char* bytes supposed to be good for?
PS. You are not clearing any most significant bit anywhere. You have an unsigned short containing 14 significant bits, so the two most significant bits are cleared. You shift the number to the left, so the most significant bit, which was always cleared, remains cleared, but the second most significant bit isn't. And all this has nothing to do with your task.
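To illustrate what splitting into two groups of seven bits looks like, here is a minimal C sketch of the encoding step (plain C rather than Objective-C; the function name encode14 is mine, not part of the assignment):
#include <stdio.h>
#include <stdint.h>

/* Encode a value in [-8192..8191] as four hex characters:
   bias by 8192, then store 7 bits per byte so each byte's MSB stays 0. */
static void encode14(int value, char out[5])
{
    uint16_t biased = (uint16_t)(value + 8192);   /* now 0..16383, i.e. 14 bits */
    uint8_t  hi = (biased >> 7) & 0x7F;           /* upper 7 bits -> 0HHHHHHH */
    uint8_t  lo = biased & 0x7F;                  /* lower 7 bits -> 0LLLLLLL */
    snprintf(out, 5, "%02X%02X", hi, lo);
}

int main(void)
{
    int samples[] = { 0, -8192, 8191, 2048, -4096 };
    char buf[5];
    for (int i = 0; i < 5; i++) {
        encode14(samples[i], buf);
        printf("%6d -> %s\n", samples[i], buf);   /* 4000 0000 7F7F 5000 2000 */
    }
    return 0;
}
Note that nothing is ever "cleared" here: each output byte simply never receives more than 7 bits, which is the point the answer makes.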

How does structure padding work?

My question is regarding structure padding. Can anyone tell me the logic behind structure padding?
Example:
struct Node {
    char c1;
    short s1;
    char c2;
    int i1;
};
Can anyone tell me how structure padding will be applied to this structure?
Assumption: an int takes 4 bytes.
Waiting for the answer.
How padding works depends entirely on the implementation.
For implementations where you have a two-byte short and four-byte int and types have to be aligned to a multiple of their size, you will have:
Offset Var Size
------ ---- ----
0 c1 1
1 ?? 1
2 s1 2
4 c2 1
5 ?? 3
8 i1 4
12 next
An implementation is free to insert padding between fields of a structure and following the last field (but not before the first field) for any reason whatsoever. The ability to pad after a structure is important for aligning subsequent elements in an array. For example:
struct { int i1; char c1; };
may give you:
Offset Var Size
------ ---- ----
0 i1 4
4 c1 1
5 ?? 3
8 next
Padding is usually done because either aligned data works faster, or misaligned data is illegal (some CPU architectures disallow misaligned access).
There is no simple answer to this, except "It depends".
It could be as little as 8 bytes, assuming two-byte shorts, or it could take 12 bytes, or it could take 42 bytes on a suitably bizarre implementation. It depends on at least the underlying architecture, the compiler and the compiler flags. Check your tool's manual for information.
Inside a struct, each member's offset in memory is based on its size and alignment. Note that this is implementation-specific.
E.g. if char takes 1 byte, short takes 2 bytes and int takes 4 bytes:
struct Node {
    char c1;   // 1 byte
               // 1 byte padding (next member requires 2-byte alignment)
    short s1;  // 2 bytes
    char c2;   // 1 byte
               // 3 bytes padding (next member requires 4-byte alignment)
    int i1;    // 4 bytes
};
This also depends on your compiler settings and architecture, and can also be modified.
If you packed this structure properly (by rearranging the order of members), you could fit it into 8 bytes, not 12 bytes (by switching c2 with s1).
The reason for alignment enforcement is that the hardware can do certain operations faster with data that have a natural alignment; otherwise it would have to perform some bitmasking, shifting and ORing to construct the data before operating on it.
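For a typical implementation with a 2-byte short, a 4-byte int and natural alignment, the offsets and the effect of reordering can be checked with offsetof and sizeof; this is only a sketch of what such an implementation usually prints (the struct name Packed is mine), since the exact numbers are implementation-dependent, as the answers stress:
#include <stdio.h>
#include <stddef.h>

struct Node   { char c1; short s1; char c2; int i1; };   /* original order */
struct Packed { char c1; char c2; short s1; int i1; };   /* c2 moved next to c1 */

int main(void)
{
    printf("Node:   c1=%zu s1=%zu c2=%zu i1=%zu size=%zu\n",
           offsetof(struct Node, c1), offsetof(struct Node, s1),
           offsetof(struct Node, c2), offsetof(struct Node, i1),
           sizeof(struct Node));     /* typically 0 2 4 8, size 12 */
    printf("Packed: c1=%zu c2=%zu s1=%zu i1=%zu size=%zu\n",
           offsetof(struct Packed, c1), offsetof(struct Packed, c2),
           offsetof(struct Packed, s1), offsetof(struct Packed, i1),
           sizeof(struct Packed));   /* typically 0 1 2 4, size 8 */
    return 0;
}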