Issues with Thumb-2 Branch Instruction

I'm currently creating an application that takes the user's input and returns the hex encoding of the branch instruction they want.
The input includes:
Branch Type (Conditional/Non-Conditional)
If conditional, the condition
Address to branch to
Address branching from
I can currently fill in most of the 32 bits, but there are three particular bits I am unable to fill in: the S bit, the J1 bit, and the J2 bit, as shown in the encoding references.
Do these bits have default values? Otherwise, how do I know which values to use for them? Thanks for your time.

I searched the internet further until I found my answer:
Experiments indicate that the J1 and J2 bits are not used and the definition is:
imm32 = SignExtend(S:imm6:imm11:'0', 32);
So basically, J1 and J2 are not used and only the S bit matters, which matches my own experiments as well. The S bit is the sign bit of the branch offset: it is 1 when the offset is negative (a backward branch) and 0 when it is positive.
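To make the quoted definition concrete, here is a small Python sketch that packs and unpacks a signed byte offset according to imm32 = SignExtend(S:imm6:imm11:'0', 32). The function names are mine, purely for illustration, and the code follows the quoted definition rather than any toolchain API; it shows that S is simply the sign bit of the 19-bit offset field.

# Illustrative only -- follows the quoted definition, not any toolchain API.
def decode_cond_branch_offset(S, imm6, imm11):
    """Rebuild the signed byte offset from the S, imm6 and imm11 fields."""
    value = (S << 18) | (imm6 << 12) | (imm11 << 1)  # S:imm6:imm11:'0' -> 19 bits
    if S:                                            # sign-extend from bit 18
        value -= 1 << 19
    return value

def encode_cond_branch_offset(offset):
    """Split an even, signed byte offset into the S, imm6 and imm11 fields."""
    value = offset & 0x7FFFF           # keep the low 19 bits (two's complement)
    S     = (value >> 18) & 0x1        # sign bit: 1 for backward branches
    imm6  = (value >> 12) & 0x3F
    imm11 = (value >> 1)  & 0x7FF
    return S, imm6, imm11

print(encode_cond_branch_offset(-48))          # (1, 63, 2024): S = 1 for a negative offset
print(decode_cond_branch_offset(1, 63, 2024))  # -48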

Why is the condition in this if statement written as a multiplication instead of the value of the multiplication?

I was reviewing some code from a library for Arduino and saw the following if statement in the main loop:
draw_state++;
if ( draw_state >= 14*8 )
  draw_state = 0;
draw_state is a uint8_t.
Why is 14*8 written here instead of 112? I initially thought this was done to save space, as 14 and 8 can both be represented by a single byte, but then so can 112.
I can't see why a compiler wouldn't optimize this to 112, since otherwise it would mean a multiplication has to be done every iteration instead of the lookup of a value. This looks to me like there is some form of memory and processing tradeoff.
Does anyone have a suggestion as to why this was done?
Note: I had a hard time coming up with a clear title, so suggestions are welcome.
Probably to explicitly show where the number 112 came from. For example, it could be the number of bits in 14 bytes (but of course I don't know the context of the code, so I could be wrong). It would then be more obvious to humans where the value came from than writing just 112.
And as you pointed out, the compiler will probably optimize it, so there will be no multiplication in the machine code.
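As a rough illustration of the constant-folding point (an analogy in Python, not the Arduino library or avr-gcc itself): even CPython folds the expression at compile time, and a C compiler does the same, so writing 14*8 never costs a run-time multiplication.

import dis

def step(draw_state):
    draw_state += 1
    if draw_state >= 14 * 8:   # written as 14*8 for readability, folded to 112
        draw_state = 0
    return draw_state

dis.dis(step)  # the disassembly compares against the constant 112, not 14 and 8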

How does a 16-bit array need a 5-bit address (Xilinx Vivado HLS)?

I am a novice in Xilinx HLS. I am following the tutorial ug871-vivado-high-level-synthesis-tutorial.pdf (page 77).
The code is
#define N 32
void array_io (dout_t d_o[N], din_t d_i[N])
{
//..do something
}
After synthesis, I got a report on the generated ports.
I am confused about how the width of the address port has been automatically sized to match the number of addresses that must be accessed (5 bits for 32 addresses).
Please help.
From UG871, it seems that the size of the array is from 0 to 16 samples, hence you need 32 addresses to access all values (see Figure 69). I guess that the number N is limited somewhere to be less than 32 (or to be exactly 16). This means that Vivado knows this limitation and generates only as many address bits as are needed. Most synthesis tools check the size constraints and optimize unnecessary logic away.
When you synthesise a function you also create registers to store the variables. The address you supply as an input selects which element of d_o or d_i is currently being written or read.
In your case, where N=32, you have 32 different elements (for both input and output). To address 32 different elements you need 32 different bit combinations, so you can point to a specific one without ambiguity. With 5 bits you have 2^5 = 32 different addresses: the minimum number of bits needed to address all your data.
For instance, whether those 32 elements are int, float, char, short, double, or arbitrary-precision values, you still need 5 address bits: the number of address bits is INDEPENDENT of the size of the data.
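In other words, the address width is just the base-2 logarithm of the number of elements, rounded up. A quick sketch of the arithmetic (plain Python, not HLS output):

from math import ceil, log2

def addr_bits(n_elements):
    """Minimum address width needed to select one of n_elements locations."""
    return max(1, ceil(log2(n_elements)))

print(addr_bits(32))   # 5 -> matches the 5-bit address port in the report
print(addr_bits(16))   # 4
print(addr_bits(100))  # 7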

Trying to understand nbits value from stratum protocol

I'm looking at the stratum protocol and I'm having a problem with the nbits value of the mining.notify method. I have trouble calculating it; I assume it's the currency difficulty.
I pulled a notify from a dogecoin pool and it returned 1b3cc366, and at the time the difficulty was 1078.52975077.
I'm assuming here that 1b3cc366 should give me 1078.52975077 when converted. But I can't seem to do the conversion right.
I've looked here, here and also tried the .NET function BitConverter.Int64BitsToDouble.
Can someone help me understand what the nbits value signifies?
You are right, nbits is the current network difficulty.
Difficulty encoding is thoroughly described here.
The hexadecimal representation 0x1b3cc366 consists of two parts:
0x1b -- number of bytes in a target
0x3cc366 -- target prefix
This means that valid hash should be less than 0x3cc366000000000000000000000000000000000000000000000000 (it is exactly 0x1b = 27 bytes long).
The floating-point representation of difficulty shows how much harder the current target is than the one used in the genesis block.
Satoshi decided to use 0x1d00ffff as the difficulty for the genesis block, so the target was
0x00ffff0000000000000000000000000000000000000000000000000000.
And 1078.52975077 is how many times smaller (that is, harder) the current target is than the initial one:
$ echo 'ibase=16;FFFF0000000000000000000000000000000000000000000000000000 / 3CC366000000000000000000000000000000000000000000000000' | bc -l
1078.52975077482646448605
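The same calculation in Python, as a minimal sketch of the encoding described above (the constants are the ones from this question):

def nbits_to_target(nbits):
    exponent = nbits >> 24           # 0x1b -> the target is 27 bytes long
    mantissa = nbits & 0x00FFFFFF    # 0x3cc366 -> the target prefix
    return mantissa * 256 ** (exponent - 3)

genesis_target = nbits_to_target(0x1D00FFFF)  # Satoshi's genesis-block target
current_target = nbits_to_target(0x1B3CC366)  # from the dogecoin notify

print(genesis_target / current_target)        # ~1078.5297507748 -> the difficulty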

Accepting user input for a variable

So, this should be an easy question for anyone who has used FORTH before, but I am a newbie trying to learn how to code this language (and this is a lot different than C++).
Anyways, I'm just trying to create a variable in FORTH called "Height" and I want a user to be able to input a value for "Height" whenever a certain word "setHeight" is called. However, everything I try seems to be failing because I don't know how to set up the variable nor how to grab user input and put it in the variable.
VARIABLE Height 5 ALLOT
: setHeight 5 ACCEPT ATOI CR ;
I hope this is an easy problem to fix, and any help would be greatly appreciated.
Thank you in advance.
Take a look at Rosettacode input/output examples for string or number input in FORTH:
String Input
: INPUT$ ( n -- addr n )
  PAD SWAP ACCEPT
  PAD SWAP ;
Number Input
: INPUT# ( -- u true | false )
  0. 16 INPUT$ DUP >R
  >NUMBER NIP NIP
  R> <> DUP 0= IF NIP THEN ;
A big point to remember for your self-edification -- C++ is strongly typed, Forth is the complete opposite. Do you want Height to be a string, an integer, or a float, and is it signed or unsigned? Each has its own use cases. Whatever you choose, you must interact with the Height variable with the chosen type in mind. Think about what your bits mean every time.
Judging by your ATOI call, I assume you want the value of Height as an integer. A 5 byte integer is unusual, though, so I'm still not certain. But here goes with that assumption:
VARIABLE Height 1 CELLS ALLOT
VARIABLE StrBuffer 7 ALLOT
: setHeight ( -- )
  StrBuffer 8 ACCEPT
  DECIMAL ATOI Height ! ;
The CELLS call makes sure you're creating a variable with the number of bits your CPU prefers. The DECIMAL call makes sure you didn't change to HEX somewhere along the way before your ATOI.
Creating the StrBuffer variable is one of numerous ways to get a scratch space for strings. Assuming your CELL is 16-bit, you will need a maximum of 7 characters for a zero-terminated 16-bit signed integer -- for example, "-32767\0". Some implementations have PAD, which could be used instead of creating your own buffer. Another common word is SCRATCH, but I don't think it works the way we want.
If you stick with creating your own string buffer space, which I personally like because you know exactly how much space you got, then consider creating one large buffer for all your words' string handling needs. For example:
VARIABLE StrBuffer 201 ALLOT
This also keeps you from having to make the 16-bit CELL assumption, as 200 characters easily accommodates a 64-bit signed integer, in case that's your implementation's CELL size now or some day down the road.
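A quick check of that sizing claim (my own arithmetic, not from the original answer):

print(len(str(-(2**63))))   # 20 -> "-9223372036854775808", well under 200 bytes
print(len(str(-(2**15))))   # 6  -> "-32768", 7 characters with a terminating NUL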

How can I use SYNCSORT to format a Packed Decimal field with a specific sign value?

I want to use SYNCSORT to force all Packed Decimal fields to a negative sign value. The critical requirement is the 2nd nibble must be Hex 'D'. I have a method that works but it seems much too complex. In keeping with the KISS principle, I'm hoping someone has a better method. Perhaps using a bit mask on the last 4 bits? Here is the code I have come up with. Is there a better way?
*
* This sort logic is intended to force all Packed Decimal amounts to
* have a negative sign with a B'....1101' value (Hex 'xD').
*
SORT FIELDS=COPY
OUTFIL FILES=1,
INCLUDE=(8,1,BI,NE,B'....1..1',OR, * POSITIVE PACKED DECIMAL
8,1,BI,EQ,B'....1111'), * UNSIGNED PACKED DECIMAL
OUTREC=(1:1,7, * INCLUDING +0
8:(-1,MUL,8,1,PD),PD,LENGTH=1,
9:9,72)
OUTFIL FILES=2,
INCLUDE=(8,1,BI,EQ,B'....1..1',AND, * NEGATIVE PACKED DECIMAL
8,1,BI,NE,B'....1111'), * NOT UNSIGNED PACKED DECIMAL
OUTREC=(1:1,7, * INCLUDING -0
8:(+1,MUL,8,1,PD),PD,LENGTH=1,
9:9,72)
In the code that processes the VSAM file, can you change the read logic to GET with KEY GTEQ and check for < 0 on the result instead of doing a specific keyed read?
If you did that, you could accept all three negative packed values xA, xB and xD.
Have you considered writing an E15 user exit? The E15 user exit lets you manipulate records as they are input to the sort process. In this case you would have a REXX, COBOL or other LE-compatible language subroutine patch the packed decimal sign field as each record enters the sort, so there is no need to split into multiple files to be merged later on.
Here is a link to example JCL for invoking an E15 exit from DFSORT (the same JCL works for SYNCSORT). Chapter 4 of this reference describes how to develop user exit routines; again, this is a DFSORT manual, but I believe SyncSort is fully compatible in this respect. Writing a user exit is no different than writing any other subroutine: get the linkage right and the rest is easy.
This is a very general outline, but I hope it helps.
Okay, it took some digging but NEALB's suggestion to seek help on MVSFORUMS.COM paid off... here is the final result. The OUTREC logic used with SORT/MERGE replaces OUTFIL and takes advantage of new capabilities (IFTHEN, WHEN and OVERLAY) in Syncsort 1.3 that I didn't realize existed. It pays to have current documentation available!
*
* This MERGE logic is intended to assert that the Packed Decimal
* field has a negative sign with a B'....1101' value (Hex X'.D').
*
*
MERGE FIELDS=(27,5.4,BI,A),EQUALS
SUM FIELDS=NONE
OUTREC IFTHEN=(WHEN=(32,1,BI,NE,B'....1..1',OR,
32,1,BI,EQ,B'....1111'),
OVERLAY=(32:(-1,MUL,32,1,PD),PD,LENGTH=1)),
IFTHEN=(WHEN=(32,1,BI,EQ,B'....1..1',AND,
32,1,BI,NE,B'....1111'),
OVERLAY=(32:(+1,MUL,32,1,PD),PD,LENGTH=1))
Looking at the last byte of a packed field is possible. You want to turn positive/unsigned values into negative ones, so if a value is greater than -1, subtract it from zero.
From a short-lived Answer by MikeC, it is now known that the data contains non-preferred signs (that is, it can contain A through F in the low-order half-byte, whereas a preferred sign would be C (positive) or D (negative)). F is unsigned, treated as positive.
This is tested with DFSORT. It should work with SyncSORT. Turns out that DFSORT can understand a negative packed-decimal zero, but it will not create a negative packed-decimal zero (it will allow a zoned-decimal negative zero to be created from a negative zero packed-decimal).
The idea is that a non-preferred sign is valid and will be accurately signed for input to a decimal machine instruction, but the result will always be a preferred sign, and will be correct. So by adding zero first, the field gets turned into a preferred sign and then the test for -1 will work as expected. With data in the sign-nybble for packed-decimal fields, SORT has some specific and documented behaviours, which just don't happen to help here.
After the normalisation of signs has been done, there is only one value left that needs to become the negative zero, X'0C', so a simple test replaces it with the constant X'0D'. Since the negative zero cannot be created arithmetically, the second test is changed from the original minus one to zero.
With non-preferred signs in the data:
SORT FIELDS=COPY
INREC IFTHEN=(WHEN=INIT,
OVERLAY=(32:+0,ADD,32,1,PD,TO=PD,LENGTH=1)),
IFTHEN=(WHEN=(32,1,CH,EQ,X'0C'),
OVERLAY=(32:X'0D')),
IFTHEN=(WHEN=(32,1,PD,GT,0),
OVERLAY=(32:+0,SUB,32,1,PD,TO=PD,LENGTH=1))
With preferred signs in the data:
SORT FIELDS=COPY
INREC IFTHEN=(WHEN=(32,1,CH,EQ,X'0C'),
OVERLAY=(32:X'0D')),
IFTHEN=(WHEN=(32,1,PD,GT,0),
OVERLAY=(32:+0,SUB,32,1,PD,TO=PD,LENGTH=1))
Note: If non-preferred signs are stuffed through a COBOL program not using compiler option NUMPROC(NOPFD) then results will be "interesting".
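For anyone who wants to play with the sign-nibble rules outside of SORT, here is a hedged Python sketch of the idea discussed above. It simply overlays the low-order half-byte of the last byte with the preferred negative sign X'D'; the SORT decks above reach the equivalent end state arithmetically (every field signed X'D' with its digits unchanged).

def force_negative_sign(packed: bytes) -> bytes:
    """Return the packed-decimal field with its sign nibble forced to X'D'."""
    last  = packed[-1]
    digit = last & 0xF0                           # keep the final digit nibble
    return packed[:-1] + bytes([digit | 0x0D])    # overlay the sign nibble with D

print(force_negative_sign(bytes.fromhex("01234C")).hex())  # 01234d (was positive)
print(force_negative_sign(bytes.fromhex("01234F")).hex())  # 01234d (was unsigned)
print(force_negative_sign(bytes.fromhex("00000C")).hex())  # 00000d (negative zero)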